Scoring & Evaluation
How CoderScout Evaluates Candidates
In CoderScout, evaluation is stage-driven and designed primarily to support Assessment workflows, in which candidates are screened and shortlisted progressively before interviews.
Each stage evaluates a candidate’s performance and can produce one or two scores, both expressed on a 0–100 scale:
- Score – based on objective rules or human evaluation
- AI Score – based on AI-driven qualitative assessment
These scores allow hiring teams to move candidates forward, place them on hold, or eliminate them using clear, evidence-based criteria, rather than intuition or early interviews.
Two Types of Scores in CoderScout
Score
The Score represents measurable or explicitly assigned performance for a stage.
Depending on the stage type, the score may be:
- automatically calculated by the system, or
- manually assigned by a human reviewer
Score is typically used when correctness, accuracy, or rule-based evaluation is the primary signal.
AI Score
The AI Score represents a qualitative evaluation of how the candidate approached the task.
It is generated using AI models that analyze:
- reasoning and problem-solving approach
- structure and clarity
- correctness beyond test cases
- depth of understanding
- maintainability and best practices (where applicable)
AI Score is especially valuable when:
- multiple valid solutions exist
- test cases alone are insufficient
- higher-order skills matter more than exact output
How Score Is Generated by Stage Type
MCQ Stage
- Score is calculated automatically
- Based on:
  - correct vs incorrect answers
  - question weightage
  - optional negative marking
This produces a deterministic score out of 100.
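The deterministic calculation can be sketched roughly as follows. This is an illustrative model of weighted MCQ scoring with optional negative marking, not CoderScout's actual implementation; the function name and penalty model are assumptions.

```python
# Illustrative sketch of deterministic MCQ scoring (assumed model, not
# CoderScout's implementation): each question carries a weight, and a wrong
# answer may optionally deduct a fraction of that weight.

def mcq_score(answers, negative_fraction=0.0):
    """answers: list of (is_correct, weight) tuples.
    Returns a score on a 0-100 scale."""
    total_weight = sum(weight for _, weight in answers)
    if total_weight == 0:
        return 0.0
    earned = 0.0
    for is_correct, weight in answers:
        if is_correct:
            earned += weight
        else:
            earned -= negative_fraction * weight  # optional negative marking
    # Clamp so negative marking never drives the score below zero.
    earned = max(earned, 0.0)
    return round(100 * earned / total_weight, 2)

# Three equally weighted questions, two correct, 25% negative marking:
print(mcq_score([(True, 1), (True, 1), (False, 1)], negative_fraction=0.25))  # → 58.33
```

With the same inputs and no negative marking, the score would be a plain weighted percentage of correct answers.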
Programming Stage
Programming stages can contain one or more programming challenges.
- Each challenge may use:
  - Input / Output tests, or
  - Unit Tests (language-specific)
Score is calculated as the percentage of test cases passed across all challenges.
Example:
If a challenge has 4 test cases and 2 pass, the score is 50/100.
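The percentage calculation can be sketched as below. This is an assumed model of "test cases passed across all challenges" (a simple pooled percentage), not CoderScout's documented formula.

```python
# Illustrative sketch (assumed, not CoderScout's implementation): the stage
# score is the percentage of test cases passed, pooled across all challenges.

def programming_score(challenges):
    """challenges: list of (passed, total) test-case counts per challenge."""
    total = sum(t for _, t in challenges)
    passed = sum(p for p, _ in challenges)
    return round(100 * passed / total) if total else 0

# One challenge with 4 test cases, 2 passing → 50/100, matching the example above.
print(programming_score([(2, 4)]))  # → 50
```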
Application Stage
Application stages fall into three categories:
Web Basics (HTML, CSS, JavaScript)
- No automated unit testing support
- Score is manually assigned by a reviewer
Web Frameworks (React, Angular, Next.js, etc.)
- Test case options:
  - None → manual score
  - Unit Tests → automatic score
REST API Applications (Flask, Express, FastAPI, NestJS)
- Test case options:
  - None → manual score
  - REST API contract tests → automatic score
For automated tests, the score reflects how many defined behaviors are satisfied.
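A REST API contract check of this kind can be sketched as follows. The endpoint paths, rule format, and helper names below are invented for illustration and are not CoderScout's actual contract format; the sketch only shows the idea of scoring by the fraction of defined behaviors satisfied.

```python
# Hypothetical sketch of REST API contract testing: each rule defines one
# expected behavior, and the automatic score is the fraction of behaviors
# the candidate's API satisfies. Rule shape and names are illustrative only.

def run_contract(rules, call):
    """rules: list of (method, path, expected_status, required_keys).
    call(method, path) -> (status, json_body). Returns a 0-100 score."""
    satisfied = 0
    for method, path, expected_status, required_keys in rules:
        status, body = call(method, path)
        if status == expected_status and all(k in body for k in required_keys):
            satisfied += 1
    return round(100 * satisfied / len(rules))

# A fake candidate API that implements one of two expected behaviors:
def fake_call(method, path):
    if (method, path) == ("GET", "/health"):
        return 200, {"status": "ok"}
    return 404, {}

rules = [
    ("GET", "/health", 200, ["status"]),
    ("GET", "/users", 200, ["users"]),
]
print(run_contract(rules, fake_call))  # → 50
```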
SQL Stage
- SQL queries are executed against predefined datasets
- Score is generated automatically based on the correctness of the query results
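A minimal sketch of this kind of grading is shown below, using SQLite as a stand-in database. The all-or-nothing comparison and the `grade_sql` helper are assumptions for illustration; CoderScout's actual grading rules are not specified here.

```python
# Minimal sketch of SQL-stage grading (assumed behavior: run the candidate's
# query against a seeded dataset and compare its result set with the expected
# rows). Uses an in-memory SQLite database for illustration.
import sqlite3

def grade_sql(setup_sql, candidate_query, expected_rows):
    conn = sqlite3.connect(":memory:")
    conn.executescript(setup_sql)
    rows = conn.execute(candidate_query).fetchall()
    conn.close()
    # Compare as sorted lists so row order does not matter.
    return 100 if sorted(rows) == sorted(expected_rows) else 0

setup = """
CREATE TABLE employees (name TEXT, dept TEXT);
INSERT INTO employees VALUES ('Ada', 'Eng'), ('Bo', 'Sales'), ('Cy', 'Eng');
"""
query = "SELECT name FROM employees WHERE dept = 'Eng'"
print(grade_sql(setup, query, [('Ada',), ('Cy',)]))  # → 100
```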
Notebook Stage
- No objective unit-test execution
- Score is manually assigned by a reviewer (out of 100)
AI Interview Stage
- Entirely evaluated by AI
- Produces:
  - topic-level scores
  - overall score out of 100
  - structured summary
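The shape of that output can be sketched as below. The field names and the plain-average aggregation are invented for illustration; CoderScout does not document how topic scores combine into the overall score.

```python
# Hypothetical sketch of the AI Interview output shape: topic-level scores,
# an overall score out of 100, and a structured summary. Averaging topics
# into the overall score is an assumption, not a documented formula.

def summarize_interview(topic_scores, summary):
    overall = round(sum(topic_scores.values()) / len(topic_scores))
    return {"topics": topic_scores, "overall": overall, "summary": summary}

result = summarize_interview(
    {"Data Structures": 80, "System Design": 60, "Communication": 70},
    "Clear reasoning; weaker on trade-off analysis.",
)
print(result["overall"])  # → 70
```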
AI Score by Stage Type
In addition to the Score, many stages are also evaluated using AI Score.
AI Score is available for:
- Programming stages
- Application stages
- Notebook stages
- AI Interview stages
For example:
- Programming AI Score may assess readability, complexity, correctness, and maintainability
- Application AI Score may assess architecture, design decisions, and implementation quality
- Notebook AI Score may assess analytical reasoning, approach, and completeness
- AI Interview AI Score reflects conversational depth, clarity, and topic mastery
Using Score vs AI Score in Assessments
In Assessments, Auto-Progression can be enabled at the assessment level.
For each stage, you can configure:
- which score to use for decisions (Score or AI Score)
- the cut-off threshold
- what happens if a candidate falls below the threshold:
  - Rejected
  - On Hold
This allows hiring teams to:
- rely on objective correctness where appropriate
- use AI Score where qualitative judgment matters more
- mix both approaches across different stages
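The per-stage configuration described above reduces to a simple decision rule, sketched here. The function name and parameters are assumptions used only to illustrate how the configured score source, cut-off, and below-threshold outcome interact.

```python
# Minimal sketch of per-stage auto-progression (assumed decision logic based
# on the configuration described above: score source, cut-off threshold, and
# below-threshold outcome).

def auto_progress(score, ai_score, use_ai_score, cutoff, below_action):
    """below_action: 'Rejected' or 'On Hold'. Returns the candidate's outcome."""
    value = ai_score if use_ai_score else score
    return "Advance" if value >= cutoff else below_action

print(auto_progress(score=72, ai_score=55, use_ai_score=False,
                    cutoff=60, below_action="On Hold"))  # → Advance
print(auto_progress(score=72, ai_score=55, use_ai_score=True,
                    cutoff=60, below_action="Rejected"))  # → Rejected
```

The same candidate can pass or fail the same stage depending on which score the stage is configured to use, which is why the score source is chosen per stage.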
How Scores Are Used in Contests
In Contests, scoring serves a different purpose.
- All candidates complete a stage together
- Scores and AI Scores are calculated once the stage ends
- Hiring teams typically:
  - review score distributions
  - inspect AI summaries
  - check integrity signals
  - manually decide who advances
For this reason, auto-progression is usually disabled in contests.
Score Is Not the Only Signal
Scores are always accompanied by:
- full webcam, screen, and audio recordings
- synchronized playback
- integrity and cheating signals
- AI-generated summaries (where applicable)
This ensures decisions are not made on a number alone, but on complete evidence.
Key Takeaway
CoderScout’s scoring system combines objective measurement and AI-driven qualitative evaluation to support fair, consistent, and scalable hiring.
By using Score and AI Score appropriately across stages, Assessments can automatically surface the strongest candidates, while ensuring Hiring Managers and SMEs engage only when meaningful decisions need to be made.