- September 23, 2020 at 3:56 pm #310669
*I wrote "possibility / current emergence" in the heading because I am not sure whether this field already exists or is still only an idea; that is why I am posting here, to learn more.*
With the emergence of deep learning, *data poisoning* became a clever hack used to confuse trained systems, and it developed further once adversarial agents came into the picture.
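To make the idea concrete, here is a minimal toy sketch of data poisoning by training-set injection. Everything in it (the data, the nearest-centroid model, the function names) is made up for illustration and is not any specific real-world attack; it only shows how a single mislabeled point that an attacker slips into the training data can shift what the model learns.

```python
# Toy sketch of data poisoning by training-set injection (illustrative
# only; the model, data, and function names are assumptions for this example).
# Model: a nearest-centroid classifier on 1-D points.

def centroids(points, labels):
    """Mean of each class's training points."""
    return {lab: sum(p for p, l in zip(points, labels) if l == lab)
                 / labels.count(lab)
            for lab in set(labels)}

def predict(x, cents):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda lab: abs(x - cents[lab]))

def accuracy(xs, ys, cents):
    return sum(predict(x, cents) == y for x, y in zip(xs, ys)) / len(xs)

# Clean training set: class 0 clusters near 0, class 1 near 10.
train_x = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
train_y = [0, 0, 0, 1, 1, 1]
test_x, test_y = [0.5, 1.5, 8.5, 9.5], [0, 0, 1, 1]

clean = centroids(train_x, train_y)
print(accuracy(test_x, test_y, clean))     # → 1.0

# Attacker injects a single mislabeled outlier (x=100 tagged as class 0),
# dragging the class-0 centroid far away from its true cluster.
poisoned = centroids(train_x + [100.0], train_y + [0])
print(accuracy(test_x, test_y, poisoned))  # → 0.5
```

One injected point halves the test accuracy here; real attacks on deep models are subtler, but the mechanism (corrupting training data so the learned decision boundary moves) is the same.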
I wanted to learn more about this: are there pentesters, security researchers, or similar professionals who deal with the safety of ML models deployed in practice, ensuring the models perform correctly, testing the whole production pipeline, and so on?
Are there specific job roles like that?
Are there courses aligned with this direction?
What is the potential growth of this field?