Organizations interested in Anthropic's grants can apply at any time; applications are evaluated on a rolling basis. Detailed information on the application process and requirements is available on Anthropic's official blog and through the provided links. The grants aim to fund the development of new types of benchmarks for evaluating AI model performance and impact.
Anthropic believes current AI benchmarks are inadequate because they don't capture how the average person actually interacts with the models being tested, and they often focus on narrow, technical measures rather than real-world applications and societal impacts. Additionally, some benchmarks may no longer measure what they claim to, given their age and the rapidly evolving nature of AI technology.
Anthropic aims to develop advanced AI models that are safe, reliable, and aligned with human values. It plans to achieve this by investing in safety-relevant evaluations and creating challenging benchmarks focused on AI security and societal implications. This will involve assessing models' abilities to carry out tasks such as mounting cyberattacks, enhancing weapons of mass destruction, and manipulating or deceiving people. Anthropic also hopes to support research into AI's potential for aiding scientific study, conversing in multiple languages, and mitigating ingrained biases.