IT/Software career thread: Invert binary trees for dollars.

Khane

Got something right about marriage
21,555
15,452
We use Snyk, and it isn't preventing AI from doing much of anything. It's used when we open pull requests to check the code being pushed for security flaws, but that's it. Then the code lands in our repositories, which GitHub Copilot has full access to.
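For context, the PR gate described above typically looks something like this in CI. `snyk test` and `snyk code test` are real Snyk CLI commands; the severity threshold and the idea of running both steps are assumptions about this particular setup, not a description of it.

```shell
# Hypothetical CI step for a pull request check (sketch, not this team's actual pipeline).
set -e

# Scan open-source dependencies; fail the check on high/critical issues.
snyk test --severity-threshold=high

# Static analysis (Snyk Code) on the project's own source.
snyk code test --severity-threshold=high
```

Note this only gates what enters the repo; as the post says, it does nothing about what Copilot can read once the code is merged.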

It's a lot like using the internet. You can try all you want to remain anonymous and keep your information private, but the only real way to ensure that is to not use it at all.
 

TJT

Mr. Poopybutthole
<Gold Donor>
46,549
127,178
If you use GitHub, or anything other than a repo hosted on your own metal, what prevents Microsoft from just cloning the backend and stealing your code?

Oh right, nothing. And it would be extraordinarily hard to prove, too.
 

Haus

I am Big Balls!
<Gold Donor>
18,774
77,501
No need to be vague, just name Snyk or Wiz; we don't need to make this sound mysterious.
Snyk and Wiz are predominantly CSPM (Cloud Security Posture Management) and code verification. They won't do much to regulate how much you do or don't abuse AI in the process, or catch it when it's going bad. They're both good at what they do, but they're only now pivoting toward the AI reality. (Which, to be honest, is true of just about every cybersecurity vendor to some degree.)

For AI and agentic protections, it's more of an emerging market with different tools. It also ties into a relative of CSPM, cloud content monitoring, where we can watch which AI models you have deployed in various locations (including externally via APIs) and help regulate them (i.e., you can use our corporate Claude or Gemini models, but no shuffling data out to your private ChatGPT).
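The "corporate Claude yes, private ChatGPT no" policy boils down to an endpoint allowlist enforced somewhere on the egress path. A minimal sketch of that check, with hypothetical hostnames and no resemblance to any vendor's actual implementation:

```shell
#!/bin/sh
# Hypothetical egress allowlist for AI API endpoints (illustrative only).
# Hostnames are examples of "approved corporate models"; the real policy
# would live in a proxy or CASB, not a shell script.
ALLOWED="api.anthropic.com generativelanguage.googleapis.com"

is_allowed() {
  host="$1"
  for a in $ALLOWED; do
    [ "$host" = "$a" ] && return 0
  done
  return 1
}

is_allowed "api.anthropic.com" && echo "corporate Claude: allowed"
is_allowed "api.openai.com"    || echo "private ChatGPT: blocked"
```

The monitoring half of the pitch is the inverse of this: discovering which endpoints are being called in the first place, so there's something to apply the allowlist to.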

One of the hurdles is that people still want HITL (human in the loop), which is a good thing, but depending on the setup, the code being generated by AI is sometimes not as human-legible as they would like. So having a system that can dissect it for potential vulnerabilities is valuable.