Short note on security hyperautomation

Manoj Vignesh K M
May 6, 2023



Adversaries are using Artificial Intelligence-based tools to perform highly sophisticated and targeted cyber attacks (learn how adversaries may use AI to carry out such attacks: https://blog.kmmanoj.me/threat-of-offensive-ai-to-organizations-ed9533ca60b7). With AI in cybersecurity, common threat hunting procedures can be performed at compute speed. And with AI-based security tools, security engineers would be able to manage and analyze humongous amounts of data to identify threats (blue) and test complex systems to identify vulnerabilities (red). Detection and vulnerability management!

By mathematically modeling a system, one would be able to identify arcane security violations. A software system could be represented as a state machine: the transitions of the automaton define interactions with the system, while the states define the unique states of the system. Hence, the hypothesis space of moving from State A to State B would be all possible sequences of interactions that lead the system from State A to State B. Nesting further, the hypothesis space for the system would be all possible state pairs, where for each pair the former defines the start state and the latter defines the end state. An AI-based algorithm is allowed to explore the hypothesis space of the system to identify one or more movements between states that are not defined (or allowed) by the system rules. Given that the system rules provided to the algorithm are exhaustive (in theory), any such identified movement between states could be tagged as a bug, a security violation, or a vulnerability. Practically, the system rules can be provided to the algorithm in a hybrid manner, i.e. in the form of predefined rules (as in supervised learning) and on-demand rules (as in reinforcement learning). The challenge, however, lies in the ever-changing behavior (code) of the system and, consequently, of its mathematical representation.
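To make this concrete, here is a minimal sketch of the idea, assuming a toy login system. The state names, actions, and both transition tables below are invented for illustration, and an exhaustive breadth-first search stands in for the AI-driven exploration described above.

```python
from collections import deque

# Hypothetical system rules: the transitions the specification allows.
ALLOWED = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "logout"): "logged_out",
    ("logged_in", "open_admin"): "admin_denied",
}

# Transitions the implementation actually performs (as discovered by probing
# the running system); it contains one unintended path.
IMPLEMENTED = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "logout"): "logged_out",
    ("logged_in", "open_admin"): "admin_denied",
    ("logged_out", "open_admin"): "admin_panel",   # bug: no auth check
}

def find_violations(start):
    """Breadth-first exploration of the implemented state machine.

    Any reachable movement that is missing from (or disagrees with) the
    declared rules is reported as a potential security violation."""
    violations = []
    seen = {start}
    queue = deque([(start, [])])          # (state, interaction sequence so far)
    while queue:
        state, path = queue.popleft()
        for (src, action), dst in IMPLEMENTED.items():
            if src != state:
                continue
            if ALLOWED.get((src, action)) != dst:
                violations.append((path + [action], src, dst))
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [action]))
    return violations

if __name__ == "__main__":
    for path, src, dst in find_violations("logged_out"):
        print(f"undefined movement {src} -> {dst} via {path}")
```

The single flagged movement (logged_out to admin_panel via open_admin) is exactly the kind of undefined transition the algorithm would surface; a learning-based explorer would prioritize which interaction sequences to try instead of enumerating them all.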

Motivation

Case study — Facebook AI Research’s negotiating bots

In 2017, Facebook’s AI research lab developed bots to understand how their customers would interact with Facebook in the future. The bots were released into the system to post, chat, react, and interact with it. One of the goals was to let the bots converse with each other and negotiate toward a common decision. The referenced research paper explains the work in great detail.

Consider a similar setting, wherein bots are released into the system: one category of bots defends (blue) against the other category of bots trying to violate security, or the system rules (red). By the end of training, one could expect the blue bots to monitor and detect potential intrusions and other malicious activities, and the red bots to effectively break the system rules, thereby pushing developers to build and deploy secure code to production.
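As a rough sketch of what such blue-versus-red training could look like when stripped down to its core, consider the toy zero-sum game below. The attack and monitoring actions, the detection table, and the epsilon-greedy learning rule are all assumptions made for illustration; they are not taken from the referenced paper.

```python
import random

# Hypothetical action sets: red picks an attack technique, blue picks what to monitor.
RED_ACTIONS = ["phishing", "sqli", "bruteforce"]
BLUE_ACTIONS = ["monitor_email", "monitor_db", "monitor_auth"]

# Blue detects the attack only when its monitoring matches the technique used.
DETECTS = {
    ("monitor_email", "phishing"),
    ("monitor_db", "sqli"),
    ("monitor_auth", "bruteforce"),
}

def pick(q, eps):
    """Epsilon-greedy choice over per-action value estimates."""
    if random.random() < eps:
        return random.choice(list(q))
    return max(q, key=q.get)

def train(rounds=5000, eps=0.1, lr=0.1):
    """Zero-sum self-play: red is rewarded when it evades detection,
    blue is rewarded when it detects; both update simple value estimates."""
    red_q = {a: 0.0 for a in RED_ACTIONS}
    blue_q = {a: 0.0 for a in BLUE_ACTIONS}
    for _ in range(rounds):
        red, blue = pick(red_q, eps), pick(blue_q, eps)
        red_reward = -1.0 if (blue, red) in DETECTS else 1.0
        red_q[red] += lr * (red_reward - red_q[red])
        blue_q[blue] += lr * (-red_reward - blue_q[blue])
    return red_q, blue_q

if __name__ == "__main__":
    red_q, blue_q = train()
    print("red value estimates: ", red_q)
    print("blue value estimates:", blue_q)
```

Because the payoff is zero-sum, whichever technique the red side favors becomes the most valuable thing for the blue side to watch, and the two sets of estimates keep pushing against each other, which is the dynamic the bot setting above relies on.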

Reference: Lewis, M., Yarats, D., Dauphin, Y., Parikh, D., & Batra, D. (2017). Deal or No Deal? End-to-End Learning for Negotiation Dialogues. https://arxiv.org/abs/1706.05125

Case study — OpenAI’s Emergent Tool Use from Multi-Agent Interaction

In 2019, researchers at OpenAI worked on developing bots that play hide-and-seek in a simulated environment. The research starts with a simple setting in which seekers (red) seek hiders (blue) by moving around in the environment, and builds up to a setting where agents can interact with objects by moving them around and locking them. As the research progressed towards the more advanced settings, an interesting phenomenon was observed: the bots began exploiting bugs in the environment to achieve their goals (i.e. to hide and to seek).

Consider a similar setting, wherein the blue bots are allowed to move around the different, highly secured areas of the system, while the red bots begin from an open area (such as the public internet) and seek the blue bots. Further, the blue bots would also be allowed to use certain programs, such as monitoring and detection systems and clean-up actions, to kick the red bots out of the secured area, while the red bots try to dodge and compromise the defenses before entering the secured area to gain persistence.
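A toy environment along these lines might look like the sketch below. Every zone name, action, and detection probability is a made-up assumption for illustration, and the random policies in the demo loop stand in for agents that would, in practice, be trained with multi-agent reinforcement learning as in the referenced work.

```python
import random

# Hypothetical network zones, ordered from the open internet to the crown jewels.
ZONES = ["internet", "dmz", "internal", "secured"]

class RedBlueNetwork:
    """Toy environment: a red bot tries to reach the secured zone and gain
    persistence, while a blue bot sweeps one zone per step to detect and
    evict it. All names, actions, and odds here are illustrative only."""

    def __init__(self, detect_prob=0.35):
        self.detect_prob = detect_prob
        self.red_zone = 0          # red starts on the public internet
        self.persistent = False

    def step(self, red_action, blue_zone):
        # Red either advances one zone deeper or digs in where it already is.
        if red_action == "advance" and self.red_zone < len(ZONES) - 1:
            self.red_zone += 1
        elif red_action == "persist" and ZONES[self.red_zone] == "secured":
            self.persistent = True

        # Blue sweeps one zone; if it matches red's location, red is detected,
        # kicked back out to the internet, and loses any persistence.
        if blue_zone == ZONES[self.red_zone] and random.random() < self.detect_prob:
            self.red_zone, self.persistent = 0, False
            return -1.0            # reward from red's point of view (zero-sum)
        return 1.0 if self.persistent else 0.0

if __name__ == "__main__":
    env = RedBlueNetwork()
    for step in range(20):         # random policies, just to show the dynamics
        reward = env.step(random.choice(["advance", "persist"]),
                          random.choice(ZONES))
        print(step, ZONES[env.red_zone], env.persistent, reward)
```

The zero-sum reward, defined from the red bot’s point of view, is what would let adversarial behavior emerge on both sides, much like the emergent tool use observed in the hide-and-seek environment.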

Reference: Baker, B., Kanitscheider, I., Markov, T., Wu, Y., Powell, G., McGrew, B., & Mordatch, I. (2019). Emergent Tool Use From Multi-Agent Autocurricula. https://arxiv.org/abs/1909.07528; https://openai.com/blog/emergent-tool-use/
