We are seeking a highly motivated and innovative Research Engineer/Research Scientist to join our Red Team focused on Alignment. In this role, you will be responsible for conducting cutting-edge research to identify, assess, and mitigate risks associated with AI systems. You will work collaboratively with a diverse team of experts to develop methodologies and frameworks that ensure the safe and ethical deployment of AI technologies. Key responsibilities include designing and executing experiments, analyzing data to derive actionable insights, and presenting findings to both technical and non-technical stakeholders.
The ideal candidate will have a strong background in machine learning, artificial intelligence, or a related field, with a proven track record of research excellence. You should have experience in risk assessment, adversarial testing, or security modeling as it pertains to AI systems. Proficiency in programming languages such as Python or C++, along with familiarity with tools for statistical analysis and data visualization, is essential. A Ph.D. or equivalent experience in a relevant discipline is preferred, along with excellent communication skills and the ability to work effectively in a team-oriented environment.
If you are passionate about advancing the field of AI safety and alignment, and thrive in a dynamic and collaborative atmosphere, we encourage you to apply. Join us in our mission to ensure that AI technologies are developed responsibly and align with societal values.