Cookies & Privacy

We use essential cookies to make our site work. With your permission, we’ll also use analytics and marketing cookies to improve your experience. You can change your choice anytime.

See our Privacy Policy for details.

Manage preferences
Cookie preferences
Back to Industry News
General

AI Safety Startup Irregular Raises $80M, Valued at $450M

Summary generated with AI, editor-reviewed
Heartspace News Desk
Source: Forbes, Forbes

Key takeaways

  • Irregular, a San Francisco-based AI safety startup previously known as Pattern Labs, has secured $80 million in seed and Series A funding, achieving a valuation of $450 million, according to Forbes
  • The investment round was led by venture capital firm Sequoia Capital
  • Irregular focuses on testing AI models for their potential for harm, collaborating with leading industry players such as OpenAI, Anthropic, and Google DeepMind
Irregular, a San Francisco-based AI safety startup previously known as Pattern Labs, has secured $80 million in seed and Series A funding, achieving a valuation of $450 million, according to Forbes. The investment round was led by venture capital firm Sequoia Capital. Irregular focuses on testing AI models for their potential for harm, collaborating with leading industry players such as OpenAI, Anthropic, and Google DeepMind. These clients leverage Irregular's expertise to identify and address vulnerabilities within their AI systems prior to public release. The escalating demand for Irregular's services is further underscored by recent industry concerns. OpenAI co-founder Sam Altman has cautioned about an impending "fraud crisis" driven by AI's capacity for impersonation. Anthropic recently revealed that its AI model, Claude, was involved in cyberattacks by assisting with malware coding and the creation of phishing emails. Additionally, the FBI has issued warnings regarding AI-generated voice messages used in phishing schemes to impersonate senior government officials. Irregular employs a "red teaming" methodology, which involves simulating environments to observe AI responses to malicious prompts. For instance, an AI might be tasked with extracting sensitive data from a mock IT network. The startup has achieved profitability, generating "several million dollars" in revenue within its first year. CEO and co-founder Dan Lahav highlighted the scarcity of expertise in AI stress-testing, emphasizing that the complexity of this challenge is growing as AI models become more advanced. Irregular intends to prioritize the development of mitigations for these evolving risks.

Related Topics

AI SafetyIrregularSequoia CapitalFundingRed TeamingAI VulnerabilitiesCybersecurity

Share Your Thoughts

(0 comments)

Be the first to share your thoughts on this article!

Stay Updated

Create alertsRead original