Globebyte Documentation


Last updated 4 months ago

Best Practices for Assess for Learning

Pro Tip: For best results, define detailed rubrics, model solutions, objectives, and aligned pedagogy within the Learning Journey Data Model (LDM). This maximizes the AI’s ability to provide targeted suggestions and mark with consistent reliability.

  1. Comprehensive Rubric Design

    • The AI can only mark effectively if your rubrics are thorough. Provide clear performance levels, detailed descriptions, and weights where applicable.

  2. Model Solutions for Complex Tasks

    • For essays or open-ended tasks, adding a Model Solution gives the AI a reference point to compare the learner’s work against an ideal.

  3. Objective Alignment

    • Ensure each assessment or module references the Objectives it aims to measure. This helps the AI provide more context-based feedback (e.g., “Level 4: Analyzing,” “Level 6: Creating”).

  4. Leverage Pedagogies

    • If you adopt Bloom’s Taxonomy or the SOLO taxonomy, the AI references those frameworks to classify and respond to learner queries at the appropriate cognitive level.

  5. Keep LDM Objects Updated

    • Regularly revise Modules, Rubrics, and Objectives to reflect evolving course content. The AI relies on accurate data to provide correct insights.

  6. Encourage Learner Conversations

    • Assess for Learning excels when learners actually engage in two-way conversations about their performance. Remind them to ask follow-up questions and explore reasons behind the AI’s marking.

  7. Monitor & Validate

    • While the AI can automate much of the process, instructors should spot-check results, especially for borderline or high-stakes scenarios. Over time, the AI “learns” from consistent data and refined rubrics, improving accuracy.
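To support the rubric-design and spot-checking advice above, you can periodically query the LDM for rubric criteria that lack a description or weight, since the AI can only mark as well as the rubric allows. The sketch below is a minimal, hypothetical example: the object and field API names (`Learning_Rubric_Criterion__c`, `Description__c`, `Weight__c`) are assumptions for illustration, and you should substitute the actual API names from the Object References section of your org.

```apex
// Hypothetical spot-check for incomplete rubric criteria.
// Object and field API names are illustrative; adjust to your org's schema.
List<Learning_Rubric_Criterion__c> incomplete = [
    SELECT Id, Name
    FROM Learning_Rubric_Criterion__c
    WHERE Description__c = null OR Weight__c = null
];

// Log each incomplete criterion so an instructor can review and fill the gaps.
for (Learning_Rubric_Criterion__c c : incomplete) {
    System.debug('Incomplete rubric criterion: ' + c.Name);
}
```

Running a check like this in Anonymous Apex (or on a schedule) before opening an assessment helps ensure the AI has complete rubric data to mark against.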

😎 Enjoy!

Explore Tutor for Learning. Built for Agentforce, Tutor is a powerful extension to our Assess for Learning agent, designed to enhance the assessment experience by giving learners an AI interface into their assessment data.
