Cyber Info
Gabryx412_ • 3mo ago
Need help please
How can we design a machine learning model that is provably immune to all possible adversarial attacks without sacrificing accuracy or efficiency?
guninvalid over coax alliance • 10/6/25, 7:57 PM
:MudWhat:
guninvalid over coax alliance • 10/6/25, 7:57 PM
uh you can try running it locally
guninvalid over coax alliance • 10/6/25, 7:58 PM
or you can do it without the machine learning and basically just build a normal chatbot
Gabryx412_ (OP) • 10/6/25, 7:58 PM
I see what you mean, but I was referring to theoretical robustness
Gabryx412_ (OP) • 10/6/25, 7:58 PM
as in, whether it’s possible to design a model that is provably immune to adversarial perturbations under any distribution
guninvalid over coax alliance • 10/6/25, 7:59 PM
i feel like you don't understand the question you're asking
guninvalid over coax alliance • 10/6/25, 7:59 PM
you can make it provably immune to specific attacks
guninvalid over coax alliance • 10/6/25, 7:59 PM
which specific attacks are you trying to mitigate?
Gabryx412_ (OP) • 10/6/25, 8:00 PM
Actually, I do understand the question; it’s a theoretical one
Gabryx412_ (OP) • 10/6/25, 8:00 PM
I’m not referring to robustness against a specific class of attacks like FGSM or PGD
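For context on the attacks named here: FGSM (the fast gradient sign method) perturbs an input one step along the sign of the loss gradient, and PGD iterates that step with projection. A minimal sketch of FGSM against a toy logistic-regression model — the weights, input, and ε below are invented purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a logistic-regression classifier.

    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM moves eps along its sign.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values only).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])   # clean input, true label 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
# The attack pushes x against the weight direction,
# lowering the model's confidence in the true label.
print(sigmoid(w @ x + b) > sigmoid(w @ x_adv + b))  # True
```

Defenses certified against this kind of attack bound the loss over the whole ε-ball, not just the single gradient-sign direction.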
Gabryx412_ (OP) • 10/6/25, 8:00 PM
I mean true, provable immunity to all possible adversarial perturbations under any data distribution, without sacrificing model accuracy or efficiency
Gabryx412_ (OP) • 10/6/25, 8:00 PM
As far as we know, that’s mathematically impossible unless you make extremely strong assumptions about the threat model or the data manifold
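What is achievable under such restricted threat models is a certificate against norm-bounded perturbations. A sketch of the L2 certified radius from randomized smoothing (Cohen et al., 2019), using only the Python standard library — the probabilities and noise level below are illustrative, not measured:

```python
from statistics import NormalDist

def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """L2 radius certified by randomized smoothing (Cohen et al., 2019).

    p_a: lower bound on the smoothed classifier's top-class probability,
    p_b: upper bound on the runner-up class probability,
    sigma: std. dev. of the Gaussian noise used for smoothing.
    The smoothed prediction is provably constant within this radius --
    but only for L2-bounded perturbations, not arbitrary attacks.
    """
    phi_inv = NormalDist().inv_cdf  # inverse standard-normal CDF
    return (sigma / 2.0) * (phi_inv(p_a) - phi_inv(p_b))

# A confident prediction yields a nonzero certificate;
# a 50/50 prediction certifies nothing.
print(certified_radius(0.9, 0.1, sigma=0.5) > 0)      # True
print(certified_radius(0.5, 0.5, sigma=0.5) == 0.0)   # True
```

Note how the guarantee is conditional on exactly the strong assumptions mentioned above: a fixed norm, a fixed budget, and probability bounds estimated by sampling.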
guninvalid over coax alliance • 10/6/25, 8:01 PM
don't you also need to make assumptions about what accuracy even is? how are you going to measure how accurate a model is?
Gabryx412_ (OP) • 10/6/25, 8:02 PM
When I said “without sacrificing accuracy,” I meant it in the conventional empirical sense: maintaining comparable performance on clean, in-distribution data
Gabryx412_ (OP) • 10/6/25, 8:02 PM
Even if we fix that definition, achieving provable immunity to all adversarial perturbations still seems theoretically impossible
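One way to make that intuition precise — a sketch, not a full proof: any non-constant classifier on a connected input space has a decision boundary, and near a boundary point arbitrarily small perturbations flip the prediction, regardless of how accurate the model is elsewhere:

```latex
\[
  f : \mathcal{X} \to \{0,1\} \text{ non-constant},\;
  \mathcal{X} \text{ connected}
  \;\Longrightarrow\;
  \exists\, x^{*} \in \overline{f^{-1}(0)} \cap \overline{f^{-1}(1)} :\;
  \forall \varepsilon > 0,\;
  \exists\, x_0, x_1 \in B_\varepsilon(x^{*}),\;
  f(x_0) \neq f(x_1).
\]
```

Whether such boundary points lie near the data manifold is exactly where the strong distributional assumptions come in.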
guninvalid over coax alliance • 10/6/25, 8:02 PM
i would agree
guninvalid over coax alliance • 10/6/25, 8:02 PM
also this probably isn't the right server to ask about this, this isn't an ML server
Gabryx412_ (OP) • 10/6/25, 8:04 PM
True, thanks