Learner Text Attempt

Introduction

A Learner Text Attempt captures a student’s written submission in response to a Learning Text Task. This object tracks everything from the content of their submission to whether it’s being AI-scored or manually graded, plus any reflection the learner adds.

Key highlights:

  • Life Cycle Status: Pending → Submitted → Scoring → Scored → Expired, so staff can see exactly where each submission sits in the grading process (see the sketch after this list).

  • Scoring & Feedback: Instructors or AI can leave a numeric score, rationale, and personalized feedback.

  • Reflection: Learners can record their thought process or learning insights.
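
A minimal sketch of how the status life cycle could be modeled in client code is shown below. The status values come from this page; the transition map and helper function are assumptions about how a consumer might validate state changes, not part of the object itself.

```typescript
// Illustrative sketch only: the status values are documented above,
// but the transition map and helper are assumed client-side logic.
type AttemptStatus = "Pending" | "Submitted" | "Scoring" | "Scored" | "Expired";

const allowedTransitions: Record<AttemptStatus, AttemptStatus[]> = {
  Pending: ["Submitted", "Expired"], // a pending attempt is either submitted or times out
  Submitted: ["Scoring"],            // scoring (AI or manual) begins after submission
  Scoring: ["Scored"],
  Scored: [],                        // terminal state
  Expired: [],                       // terminal state
};

function canTransition(from: AttemptStatus, to: AttemptStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```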

Properties

Property Name
Description

LearningTextTaskId

Identifies which open-ended question this attempt is answering (e.g., “Essay #1: Marketing Theories”).

ContactId

The learner who submitted this attempt. This link ensures the system knows who did the work.

Status

Reflects the attempt’s current state (Pending, Submitted, Scoring, Scored, or Expired). Allows staff to see if the learner’s attempt has been graded or if it’s still waiting for evaluation.

PendingDate

Shows when this attempt was initially created or set to pending.

SubmittedDate

When the learner finalized and submitted their text response.

ScoringDate

Marks when scoring (AI or manual) officially started.

ScoredDate

Timestamp for when the scoring was completed.

ExpiredDate

If the submission window closes before the learner submits, the system can mark the attempt as Expired and record this date.

Attempt

The actual text of the learner’s essay or short-answer response. This can be quite long if the question is in-depth.

AttemptURL

(Optional) If the learner hosts their response externally (like a Google Doc), store the link here.

AttemptType

The format of the external file when AttemptURL is used (e.g., MS Word, Text).

AttemptHost

Indicates whether the file is stored in Salesforce Files, at a public URL, in AWS S3, or elsewhere.

AttemptReflection

(Optional) The learner’s own reflection on how they approached the question. Encourages metacognition.

Score

A numeric grade, calculated by AI against rubric criteria or entered manually by an instructor.

ScoreReason

Explains why that score was given, possibly referencing a rubric’s rules.

ScoreInstruction

General feedback or suggestions for improvement.

ScorePersonalInstruction

Tailored comments just for that learner (e.g., “Focus more on referencing real marketing data next time.”).

ScoreInstructionPlus1

Additional instructions that encourage further practice or more advanced learning when the system provides an extra push.

ScorePersonalInstructionPlus1

Deeper, more personalized coaching or tips. Great for higher-level learners or those who need very targeted next steps.

CreatedDate

Automatically recorded when the attempt record is first created.

ModifiedDate

Automatically updated as feedback or scores are edited.
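
Taken together, the properties above suggest a record shape roughly like the following. The field names are taken from the table; the TypeScript types and which fields are optional are assumptions, so treat this as a sketch rather than the object's actual schema.

```typescript
// Assumed TypeScript shape of a Learner Text Attempt record; actual API types may differ.
interface LearnerTextAttempt {
  LearningTextTaskId: string; // the open-ended question this attempt answers
  ContactId: string;          // the learner who submitted the attempt
  Status: "Pending" | "Submitted" | "Scoring" | "Scored" | "Expired";

  PendingDate?: Date;
  SubmittedDate?: Date;
  ScoringDate?: Date;
  ScoredDate?: Date;
  ExpiredDate?: Date;

  Attempt?: string;           // the learner's written response
  AttemptURL?: string;        // link to an externally hosted response
  AttemptType?: string;       // e.g. "MS Word", "Text"
  AttemptHost?: string;       // e.g. "Salesforce Files", "Public URL", "AWS S3"
  AttemptReflection?: string; // the learner's reflection on their approach

  Score?: number;
  ScoreReason?: string;
  ScoreInstruction?: string;
  ScorePersonalInstruction?: string;
  ScoreInstructionPlus1?: string;
  ScorePersonalInstructionPlus1?: string;

  CreatedDate: Date;
  ModifiedDate: Date;
}
```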

Example

A student named Alex completes “Essay on Consumer Behavior”. They paste their 500-word write-up into the Attempt field, record how they used external data in AttemptReflection, and click submit. The attempt’s status becomes “Submitted,” the system logs the SubmittedDate, and AI scoring begins shortly afterwards, setting the ScoringDate.
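
In code, Alex’s submission might look something like this sketch, reusing the LearnerTextAttempt shape from the Properties section. The submitAttempt and startScoring helpers are hypothetical; only the fields and status values come from this page.

```typescript
// Hypothetical flow for the example above; submitAttempt and startScoring
// are assumed helpers, not documented APIs.
function submitAttempt(attempt: LearnerTextAttempt, text: string, reflection?: string): void {
  attempt.Attempt = text;                 // the 500-word write-up
  attempt.AttemptReflection = reflection; // optional learner reflection
  attempt.Status = "Submitted";
  attempt.SubmittedDate = new Date();     // the system logs the submission time
  attempt.ModifiedDate = new Date();
}

// Shortly afterwards, an AI scoring job might pick the record up:
function startScoring(attempt: LearnerTextAttempt): void {
  attempt.Status = "Scoring";
  attempt.ScoringDate = new Date();
  attempt.ModifiedDate = new Date();
}
```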

Use Case

In a professional training course, employees must write a short reflection on “Applying Lean Principles in our Company.” Many attach or link to a more detailed Google Doc. The system transitions them from “Pending” to “Submitted.” The instructor then manually scores it, providing structured feedback. Over time, the organization sees how well staff internalize lean concepts, assisted by the attempt-level scoring data.
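
A similar sketch covers the externally hosted, manually scored variant, again using the assumed LearnerTextAttempt shape. The helper functions and field values (e.g., “Google Doc”, “Public URL”) are illustrative, not documented values.

```typescript
// Hypothetical helpers for an externally hosted response and manual scoring.
function recordExternalSubmission(attempt: LearnerTextAttempt, docUrl: string): void {
  attempt.AttemptURL = docUrl;        // e.g. a link to the detailed Google Doc
  attempt.AttemptType = "Google Doc"; // illustrative value
  attempt.AttemptHost = "Public URL"; // illustrative value
  attempt.Status = "Submitted";
  attempt.SubmittedDate = new Date();
  attempt.ModifiedDate = new Date();
}

function recordManualScore(
  attempt: LearnerTextAttempt,
  score: number,
  reason: string,
  personalFeedback: string
): void {
  attempt.Score = score;
  attempt.ScoreReason = reason;                        // rubric-based rationale
  attempt.ScorePersonalInstruction = personalFeedback; // tailored comments for this learner
  attempt.Status = "Scored";
  attempt.ScoredDate = new Date();
  attempt.ModifiedDate = new Date();
}
```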
