LG CNS has launched the ‘Purple Lab,’ a virtual organization of twenty security experts created to bolster cybersecurity. The Purple Lab pairs an offensive Red Team of ethical hackers, known as white hackers, with a defensive Blue Team that runs a 24/7 smart security control center. The name ‘Purple Lab’ symbolizes the blending of red and blue, the two teams working as one.
These teams build and manage servers and systems in a cloud environment, engaging in simulated cyber confrontations. The Red Team initiates unannounced attacks, while the Blue Team defends, analyzes the entry points, and develops countermeasures.
Previously, red teams operated independently, conducting unscheduled hacking attempts to test security. By merging red and blue teams, organizations can jointly study hacking methods and implement countermeasures under real-world conditions.
The creation of these integrated security teams comes in response to a surge in cyberattacks driven by digital transformation and advances such as AI. These technologies introduce new, largely unexplored security vulnerabilities, making robust organizational security increasingly crucial.
Hyundai AutoEver, the IT subsidiary of Hyundai Motor Group, formed its red team last year to provide penetration testing for itself and its affiliates. The team simulates real hacker attacks to evaluate and improve security measures. Similarly, SK Shieldus operates EQST, Korea’s largest white-hacker group with more than 120 members, which conducts simulated hacks to test and strengthen security across industries including finance, telecommunications, and manufacturing.
The Korea Internet & Security Agency has reported a sharp rise in hacking victims, with the number of hacked companies receiving protective measures climbing from 4,063 in 2021 to 9,617 in 2023.
Big tech companies use red teams to enhance security, especially as they roll out generative AI technologies. These teams not only defend against potential AI abuses but also identify and rectify errors and biases in AI-generated responses.
NAVER, for instance, employs its Red Team to ensure the safety of HyperCLOVA X, its generative AI model. “We use harmful topics and attack strategies to test our AI models and improve the accuracy, reduce the bias, and bolster the safety of the information they generate,” a NAVER official said.
The role of red teams has grown due to the complex nature of AI. “Generative AI is challenging because it can produce different answers to the same question each time,” said a tech industry insider. “This technical instability makes red teams vital for helping senior executives like CEOs and CTOs make informed decisions.”