d0kefish
d0kefish16mo ago

XSS injection question

I’m creating a very simple website that will use ChatGPT’s API. I want to display the content I get from the API in a nice-looking way, so I figured I could just have ChatGPT add HTML tags. However, I came to realize that to display what I get back I need to mark the response as “safe”, i.e. the browser just renders it as HTML. As I’ve come to understand, this is a risk for XSS injection. How big a risk is this? It feels unlikely that I’d get malicious code back from the API, but I can’t say that for sure.
8 Replies
DirtyJ
DirtyJ16mo ago
Just to make sure I'm on the same page: you're getting a response from ChatGPT that you'd like to display in a web app, so you asked ChatGPT to add HTML formatting. If that's the case, you should thoroughly sanitize the response (both server-side and client-side) to harden that output against XSS before rendering it. In addition, take a look through the OWASP cheat sheet for XSS: https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html
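(For context, here is a minimal sketch of what "setting the response to safe" means on the Django side; the view, template, and helper names are hypothetical, and this is not a complete solution, just an illustration of where the risk appears.)

```python
# By default Django's template engine autoescapes anything you pass it, so
# HTML returned by the API would show up as literal text. The XSS risk only
# appears once you opt out of that escaping (mark_safe / the |safe filter),
# which is why the output needs to be sanitized first.
from django.shortcuts import render

def answer_view(request):
    raw_html = ask_chatgpt(request.GET.get("q", ""))  # hypothetical API wrapper
    # Rendered as {{ answer }} in the template, this string is autoescaped and
    # therefore harmless; rendered as {{ answer|safe }}, any <script> or
    # onerror= the model produced would execute in the visitor's browser.
    return render(request, "answer.html", {"answer": raw_html})
```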
DirtyJ
DirtyJ16mo ago
If you can provide some more info about the specific environment you're developing in, I might know of some libraries to help with that
d0kefish
d0kefishOP16mo ago
Yes (this might be a stupid solution) — I’m using Python/Django.
DirtyJ
DirtyJ16mo ago
Here's some Python info from Snyk: https://go.snyk.io/rs/677-THP-415/images/Python_Cheatsheet_whitepaper.pdf — it looks like they describe using bleach for sanitization in there: https://github.com/mozilla/bleach
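(A minimal sketch of what that could look like in a Django project, assuming bleach is installed; the tag whitelist, attribute policy, and function name below are illustrative assumptions, not a fixed recipe.)

```python
import bleach
from django.utils.safestring import mark_safe

# Only allow simple formatting tags; anything else the model emits is removed.
ALLOWED_TAGS = {"p", "br", "strong", "em", "ul", "ol", "li", "h2", "h3", "code", "pre"}
ALLOWED_ATTRS = {}  # no attributes at all, so no onclick=, href="javascript:...", etc.

def sanitize_answer(raw_html: str) -> str:
    """Strip everything outside the whitelist before the template renders it."""
    cleaned = bleach.clean(
        raw_html,
        tags=ALLOWED_TAGS,
        attributes=ALLOWED_ATTRS,
        strip=True,  # drop disallowed tags entirely instead of escaping them
    )
    # Only after cleaning is it reasonable to mark the string as safe for
    # rendering without Django's autoescaping.
    return mark_safe(cleaned)
```

In the template you would then render the already-cleaned value, and keep the whitelist as small as the formatting you actually need.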
d0kefish
d0kefishOP16mo ago
I’ll dig into this, thank you. One more angle on this: is an XSS attack a risk for the host, or is it more of a risk for the user viewing the page?
DirtyJ
DirtyJ16mo ago
XSS is primarily executed against a site's users as they interact with the application. However, XSS can be chained with other techniques for privilege escalation, and in an environment with enough actionable vulnerabilities, a threat actor could escalate privileges and move laterally until they get where they want to be.
d0kefish
d0kefishOP16mo ago
The site basically works by the user requesting an answer, so I’m thinking that if they trick ChatGPT into sending back something malicious, it would only be displayed to themselves. But sanitizing it and limiting the risk as far as possible is probably the better way to go.
