Ideally, problematic payments are caught immediately, but in the real world you end up with a bucket of “clearly shady in retrospect” payments that you have to work with.
In a situation where a fix is available (e.g. refund/cancellation), there’s an interesting timing question.
If you take action immediately, you have the satisfaction of knowing victims are made square, likely before they notice the problem.
If you delay a little, possibly till just before the benefits are reaped, you reduce the speed at which the attackers can learn about your systems.
How do you think about this balance? There’s also the question of whether it’s worth investing in systems to fake out scammers by tweaking visibility (similar to shadow-banning).
Double dipping: I’m interested in whether you’ve built principles around this, or whether it’s something you think about per incident, based on the apparent sophistication of the attackers.
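To make the timing-and-visibility idea concrete, here is a minimal sketch, assuming a hypothetical FlaggedPayment record with a known funds-availability time; every name is illustrative, not drawn from any real payments system. The jitter is the point: reversing at a randomized moment shortly before payout tells the attacker when they lost, but not when or by what rule they were caught.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
import random

@dataclass
class FlaggedPayment:
    payment_id: str
    flagged_at: datetime
    funds_available_at: datetime  # when the attacker would reap the benefit
    shadow_visible: bool = True   # account holder still sees a "normal" payment

def schedule_reversal(p: FlaggedPayment) -> datetime:
    """Pick a reversal time shortly before the benefit is reaped.

    A randomized lead time keeps the reversal from cleanly revealing
    when, or how, the payment was detected.
    """
    lead = timedelta(hours=random.uniform(1, 6))
    reversal_at = p.funds_available_at - lead
    # Never schedule into the past; if the window has closed, act now.
    return max(reversal_at, datetime.utcnow())

def visible_status(p: FlaggedPayment, viewer_is_account_holder: bool) -> str:
    """Shadow-banning-style visibility: the (presumed) attacker keeps
    seeing a completed payment until the reversal actually fires."""
    if viewer_is_account_holder and p.shadow_visible:
        return "completed"
    return "under_review"
```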
The amount of work given to these questions in industry is far, far higher a) than people model it as and b) than I can conveniently fit in a comment.
To the second question, the answer is Yes. Sometimes it is answered by plans informed by written principles, drawn up well in advance of need and enforced through systems built by people, where the actual decisioning substrate might be professionals or might be a computer system. Sometimes it is ad hoc decision-making in the moment.
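As a minimal sketch of that split, with made-up thresholds and route names (no real firm’s policy): rules written down in advance decide whether the machine acts on its own, a professional gets the case, or the payment is simply watched.

```python
from enum import Enum, auto

class Route(Enum):
    AUTO_REVERSE = auto()   # the computer system acts on its own
    MANUAL_REVIEW = auto()  # a professional makes the call
    MONITOR_ONLY = auto()   # keep watching; take no action yet

def route_flagged_payment(fraud_score: float, amount_cents: int) -> Route:
    # Clear-cut and low-stakes: safe to automate.
    if fraud_score > 0.95 and amount_cents < 50_000:
        return Route.AUTO_REVERSE
    # Ambiguous or high-stakes: escalate to a human.
    if fraud_score > 0.60 or amount_cents >= 50_000:
        return Route.MANUAL_REVIEW
    return Route.MONITOR_ONLY
```

The principles live in the thresholds, written once in advance; which substrate executes them varies case by case, matching the “sometimes planned, sometimes ad hoc” answer above.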