Recession Profit Alerts
Security

With Its Security Under Scrutiny, OpenAI Is Recruiting a Cybersecurity ‘Red Team’

September 22, 2023

Looking to bolster the security of its popular AI chatbot, OpenAI is turning to outside cybersecurity and penetration-testing experts, also known as "red teams," to find holes in the AI platform.

The company says it is looking for experts across various fields, including cognitive and computer science, economics, healthcare, and cybersecurity. The aim, OpenAI says, is to improve the safety and ethics of AI models.

The open invitation comes as the US Federal Trade Commission launches an investigation into OpenAI’s data collection and security practices, and policymakers and corporations are questioning how safe using ChatGPT is.

“[It’s] crowdsourcing volunteers to jump in and do fun security stuff,” Halborn Co-founder & CISO Steven Walbroehl told Decrypt. “It’s a networking opportunity, and a chance to be [on] the frontline of tech.”

“Hackers—the best ones—like to hack the newest emerging tech,” Walbroehl added.

To sweeten the deal, OpenAI says red team members will be compensated, and no prior experience with AI is necessary—only a willingness to contribute diverse perspectives.

“We’re announcing an open call for the OpenAI Red Teaming Network and invite domain experts interested in improving the safety of OpenAI’s models to join our efforts,” OpenAI wrote. “We are looking for experts from various fields to collaborate with us in rigorously evaluating and red-teaming our AI models.”

Red teams are cybersecurity professionals who specialize in attacking systems and exposing vulnerabilities, a practice also known as penetration testing or pen-testing. Blue teams, by contrast, are the professionals who defend systems against those attacks.

“Beyond joining the network, there are other collaborative opportunities to contribute to AI safety,” OpenAI continued. “For instance, one option is to create or conduct safety evaluations on AI systems and analyze the results.”


Founded in 2015, OpenAI entered the public eye late last year with the launch of ChatGPT, followed by the more advanced GPT-4 in March, taking the tech world by storm and ushering generative AI into the mainstream.

In July, OpenAI joined Google, Microsoft, and others in pledging to commit to developing safe and secure AI tools.

While generative AI tools like ChatGPT have revolutionized how people create content and consume information, AI chatbots have not been without controversy, drawing accusations of bias, racism, fabricating information (hallucinating), and a lack of transparency about how and where user data is stored.

Concerns over user privacy led several countries, including Italy, Russia, China, North Korea, Cuba, Iran, and Syria, to implement bans on using ChatGPT within their borders. In response, OpenAI updated ChatGPT to include a delete chat history function to boost user privacy.

The Red Team program is the latest play by OpenAI to attract top security professionals to help evaluate its technology. In June, OpenAI pledged $1 million towards cybersecurity measures and initiatives that use artificial intelligence.

While the company said researchers are not restricted from publishing their findings or pursuing other opportunities, OpenAI noted that members of the program should be aware that involvement in red teaming and other projects is often subject to non-disclosure agreements (NDAs) or "must remain confidential for an indefinite period."

“We encourage creativity and experimentation in evaluating AI systems,” OpenAI concluded. “Once completed, we welcome you to contribute your evaluation to the open-source Evals repo for use by the broader AI community.”
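For context, evaluations in the style of the open-source Evals repo pair prompts with ideal answers and score a model's outputs against them. The sketch below is a simplified, self-contained illustration of that pattern, not the framework's own code: the sample format loosely mirrors the Evals JSONL convention, the exact-match grader and the `toy_model` stub are hypothetical stand-ins for a real chat-completion call.

```python
# Each sample pairs a chat-style prompt with an ideal answer,
# loosely mirroring the JSONL sample format used by the Evals repo.
SAMPLES = [
    {"input": [{"role": "user", "content": "Is 17 prime?"}], "ideal": "Yes"},
    {"input": [{"role": "user", "content": "Is 18 prime?"}], "ideal": "No"},
]

def run_eval(samples, complete):
    """Score a completion function by exact match against each ideal answer."""
    correct = 0
    for sample in samples:
        answer = complete(sample["input"])
        if answer.strip().lower() == sample["ideal"].strip().lower():
            correct += 1
    return correct / len(samples)

# A stub "model" standing in for a real chat-completion call (assumption:
# in practice this would query an actual AI system under evaluation).
def toy_model(messages):
    return "Yes" if "17" in messages[-1]["content"] else "No"

accuracy = run_eval(SAMPLES, toy_model)
print(f"accuracy: {accuracy:.0%}")  # → accuracy: 100%
```

Real evaluations in the repo use richer graders (model-graded, fuzzy match) and a registry of YAML specs, but the prompt/ideal/score loop above is the core idea.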


OpenAI did not immediately return Decrypt’s request for comment.


