2026 Rubric

Judging Format

This year's evaluation will again follow a science fair format. Toward the end of the evening, all teams will stop working and shift to presenting their projects. Judges, parents, and students are encouraged to visit each project to see what other teams have built. During this period, judges will evaluate each team's work against the rubric.

The maximum number of points a single judge can award to a single team is 11 (4 Execution + 4 Level of Difficulty + 2 Ambition + 1 Presentation); the maximum penalty is -2 (AI Proficiency). Each team's final score is the average of its scores across all judges. Judges who have a child on a particular team will be excused from evaluating that team.

The criteria are applied equally to all teams - team backgrounds, reputations, and skill levels do not influence scoring.

The top 3 teams will be invited to present to the full audience, and then prizes will be announced.

Scoring Summary

Execution: 0 to 4 points
Level of Difficulty: 0 to 4 points
AI Proficiency: 0 to -2 points
Ambition: 0 to +2 points
Presentation: 0 to +1 point
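The scoring arithmetic above can be sketched in a few lines of Python. This is purely illustrative - the function names and the idea of validating each mark against its rubric range are assumptions, not part of the official judging process; the category ranges and the averaging rule come from this document.

```python
def judge_score(execution, difficulty, ai_proficiency, ambition, presentation):
    """Sum one judge's marks, checking each against its rubric range."""
    assert 0 <= execution <= 4          # Execution: 0 to 4
    assert 0 <= difficulty <= 4         # Level of Difficulty: 0 to 4
    assert -2 <= ai_proficiency <= 0    # AI Proficiency: 0 to -2
    assert 0 <= ambition <= 2           # Ambition: 0 to +2
    assert 0 <= presentation <= 1       # Presentation: 0 to +1
    return execution + difficulty + ai_proficiency + ambition + presentation

def team_score(judge_scores):
    """Average across judges. Judges with a child on the team are excused,
    so their marks simply never appear in judge_scores."""
    return sum(judge_scores) / len(judge_scores)

# A perfect single-judge score: 4 + 4 + 0 + 2 + 1 = 11
best = judge_score(4, 4, 0, 2, 1)
# The worst case: 0 + 0 + (-2) + 0 + 0 = -2
worst = judge_score(0, 0, -2, 0, 0)
```

Note that a No-AI team that simply scores 0 on AI Proficiency is never at a disadvantage, since that category can only subtract points.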

Judging Rubric

Execution (0-4 points)

4
Flawless Execution - The project is fully functional, stable, and free of major bugs. Performance is smooth, and all intended features work as expected. The team has handled edge cases and ensured robustness.
3
Solid Execution - The project is mostly functional, with minor bugs or missing features. Performance is acceptable, but some refinements could improve usability. Some edge cases may not be handled.
2
Partial Execution - The project works in some cases but has noticeable bugs, crashes, or incomplete features. Some core functionality may be missing, but the concept is demonstrated.
1
Minimal Execution - The project barely works or is incomplete. Most features are non-functional, and the team struggled to implement their ideas.
0
Non-functional - The project does not run or function at all.

Level of Difficulty (0-4 points)

4
Very Challenging - Project tackles problems where AI assistants provide limited help. Involves novel algorithms, complex system integrations, cutting-edge APIs, or domain-specific logic that requires genuine problem-solving beyond what tools like Copilot or ChatGPT can readily generate.
3
Challenging - Project requires significant problem-solving beyond AI assistance. Students had to develop custom solutions, integrate multiple complex systems, or work with technologies that AI tools handle poorly. Clear evidence of debugging and adaptation.
2
Moderate Difficulty - Students had to significantly modify generated code, troubleshoot non-obvious issues, or combine multiple AI-assisted components in ways that demanded real comprehension of the underlying systems.
1
Basic Difficulty - Project required some modification of AI-generated code but follows common patterns. Students made minor adaptations and showed some understanding of the code.
0
Minimal Difficulty - Project could be largely assembled using AI-generated code with minimal modification. Standard CRUD apps, basic templates, or well-documented tutorials that AI tools handle effectively. Limited evidence of problem-solving.

AI Proficiency (0 to -2 points)

0
Expert AI Collaboration - The team demonstrates deep understanding of how they prompted AI tools and can clearly explain their reasoning. They critically reviewed and understand all generated code, can identify what the AI did well and where they made modifications, and show genuine comprehension of how their project works.
-1
Strong AI Understanding - The team shows solid understanding of their prompting approach and can explain most of the generated code. They reviewed the output thoughtfully and can describe the key components, though some areas may be less fully understood.
-2
Partial AI Understanding - The team has basic awareness of how they used AI and can explain some portions of the code. Review of generated code was superficial, and they struggle to explain significant parts of how their project works.

Ambition (0 to +2 points)

+2
Global Impact - The project directly addresses one of humanity's grand challenges. It aims to solve problems affecting millions or billions of people: health, water, food, energy, education, poverty, or environmental sustainability. The team is thinking at a scale that could meaningfully change the world.
+1
Meaningful Impact - The project addresses a real societal problem that affects a specific community or population. It may not be global in scope, but it demonstrates awareness of human needs and a genuine desire to make people's lives better.
0
Limited Impact - The project solves a personal convenience problem, is purely entertainment-focused, or does not demonstrate intent to address broader human needs.

Presentation (0 to +1 point)

+1
Clear & Engaging - The team clearly explains what they built, why it matters, and demonstrates it working.
0
Unclear or Incomplete - The presentation is confusing, overly brief, or fails to convey what the project actually does or why.

About the AI Rubric

AI has become a powerful tool, and we encourage interested teams to use AI to build their projects. However, we also want to encourage teams to understand what the AI is doing, and to keep the rubric level for our No-AI teams. For both reasons, the AI Proficiency category is scored as possible deductions based on understanding, rather than as bonus points.