Best Practices
In this document, we bring together the most important recommendations, guidelines, and expert tips for leveraging the Learning Journey Data Model (LDM) to its fullest potential. Whether you’re a System Administrator, Instructional Designer, or Instructor, these practices will help ensure your educational structures remain consistent, scalable, and aligned with your organizational goals.
1. ID Structure and Reference Best Practices
Use Consistent, Domain-Based Naming
Domain Approach: We encourage using a structured URI-style naming convention for your ReferenceId fields, such as:
http://yourdomain.com/CURR01/
http://yourdomain.com/CURR01/PATH02/
http://yourdomain.com/CURR01/PATH02/COURSE5/
This keeps your objects uniquely identifiable across systems.
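If you generate these IDs programmatically, a small helper can keep the convention uniform. A minimal sketch, assuming the URI convention above; build_reference_id is illustrative, not an LDM API:

```python
def build_reference_id(domain: str, *segments: str) -> str:
    """Assemble a hierarchical, URI-style ReferenceId from path segments."""
    path = "/".join(segment.strip("/") for segment in segments)
    return f"{domain.rstrip('/')}/{path}/"

# Produces "http://yourdomain.com/CURR01/PATH02/COURSE5/"
print(build_reference_id("http://yourdomain.com", "CURR01", "PATH02", "COURSE5"))
```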
Pivot-Friendly “ReferenceId”
Editable ReferenceId: In the LDM, the ReferenceId can default to the ActivityDefinitionId but remains changeable after creation.
Why This Helps: If you rename or reorganize your hierarchy, you don’t lose historical data and can “pivot” your references.
Activity Definition Logic: If you’re pulling data via xAPI or other ingestion streams, prefer storing the object’s ID extension or context extension in ReferenceId. This ensures consistent alignment with external systems.
Wildcards and Transformations
If your environment supports transformations (e.g., a custom mapping table that rewrites IDs), define rules with wildcards for domain paths or versioning. For instance:
https://mydomain.com/course1/* → http://globebyte-ldm.com/CURR/mycourse1
This allows you to harmonize IDs from different systems without a messy manual process.
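A sketch of how such a rule table might be applied in a custom ingestion script; the rule format and the transform_id helper are assumptions, not a built-in LDM feature:

```python
import fnmatch

# Illustrative mapping table: wildcard source pattern -> harmonized ReferenceId.
TRANSFORM_RULES = {
    "https://mydomain.com/course1/*": "http://globebyte-ldm.com/CURR/mycourse1",
}

def transform_id(source_id: str) -> str:
    """Rewrite an incoming ID to its harmonized form, or pass it through."""
    for pattern, target in TRANSFORM_RULES.items():
        if fnmatch.fnmatch(source_id, pattern):
            return target
    return source_id

print(transform_id("https://mydomain.com/course1/unit3"))
# -> http://globebyte-ldm.com/CURR/mycourse1
```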
2. Pedagogical Consistency
Pick a Primary Framework (or Blend Wisely)
Bloom’s Taxonomy: A standard for cognitive skill progression (Remember → Understand → Apply → Analyze → Evaluate → Create).
SOLO, Krathwohl’s Affective Domain, Marzano’s New Taxonomy: Each offers unique perspectives on how learners progress.
Consistent Verb Usage: If adopting Bloom’s, choose action verbs like “analyzed,” “evaluated,” or “created” for advanced levels. This consistency helps the AI or instructors interpret the depth of tasks.
Align Objectives and Assessments
Define Learning Objectives that match your chosen pedagogy levels. For instance, a Bloom’s Level 4 objective, “Analyze market trends,” should map to tasks that actually measure analytical thinking (e.g., a case study critique).
Rubric Integration: Each criterion in the rubric should reflect or link to the objective’s level (e.g., “Analysis & Depth” for a Bloom’s Level 4 or 5 objective).
Leverage “InstructionReference” Fields
If your Pedagogy or Objectives require unique teaching strategies (e.g., “Use group debates for L5 analysis tasks”), put those details in InstructionReference on the Objective Assignment. This ensures consistent, rich context for instructors and the AI across the platform.
3. Designing Rubrics for Transparency and Fairness
Provide Rubrics Before Assignments
Advance Disclosure: When learners know the exact criteria and performance levels, they produce higher-quality work and understand how they will be evaluated.
Consistent Scoring: A well-structured rubric (with “Excellent,” “Good,” “Fair,” “Poor” or a numeric scale) ensures all graders use the same standard.
Use Weighted Criteria When Appropriate
Some courses emphasize certain skills more heavily. If you’re grading a research paper, “Evidence & Support” might carry more weight than “Grammar & Style.”
Implementation: In your Rubric or Rubric Criterion, set a Weight for the criterion. Summations in the platform or AI can factor these weights automatically.
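For intuition, here is a minimal sketch of that weighted summation; the criterion names, weights, and scores are hypothetical, and the platform performs the calculation for you once Weight is set:

```python
# Hypothetical rubric rows: (criterion, weight, awarded score out of max_score).
criteria = [
    ("Evidence & Support", 0.4, 4),
    ("Analysis & Depth",   0.4, 5),
    ("Grammar & Style",    0.2, 3),
]
max_score = 5

# Weighted total as a percentage: sum of weight * (score / max) per criterion.
weighted_total = sum(w * (score / max_score) for _, w, score in criteria) * 100
print(f"Weighted score: {weighted_total:.1f}%")  # Weighted score: 84.0%
```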
Combine Rubrics with AI
AI Scoring: If you have “AI Aggregated Marking,” link a Rubric to your tasks. The AI can reference each criterion, propose scores, and provide feedback.
Model Solutions: Provide an ideal answer for complex tasks (e.g., essays). This helps the AI or instructors see a perfect example, ensuring more consistent grading across large cohorts.
4. AI Readiness
Enable AI at Course or Module Level
Checkbox: In the LDM, you set AIEnabled on a Course or Module.
AIStatus & AIGeneratedDate: The system updates these fields to tell you whether AI insights or auto-grading are fully up to date.
Leverage “AssessingType” in Assessments
Disable AI: If you prefer manual-only grading.
AI Aggregated Marking and Approval: The AI calculates final marks directly and sets them to “Approved.”
AI Aggregated Marking for Manual Approval: The AI suggests scores; an instructor finalizes them.
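Conceptually, the three modes branch as in the sketch below; the handler functions are placeholders rather than platform APIs, and only the mode strings mirror the list above:

```python
# Placeholder handlers, stand-ins for platform services so the sketch runs.
def queue_for_manual_grading(attempt): print("Manual grading:", attempt)
def ai_score(attempt): return 7.5  # stand-in for the AI scoring service
def finalize(attempt, mark, status): print("Finalized:", attempt, mark, status)
def queue_for_instructor_review(attempt, mark): print("For review:", attempt, mark)

def route_grading(assessing_type: str, attempt: str) -> None:
    """Branch the grading flow on an Assessment's AssessingType value."""
    if assessing_type == "Disable AI":
        queue_for_manual_grading(attempt)            # instructor grades everything
    elif assessing_type == "AI Aggregated Marking and Approval":
        finalize(attempt, ai_score(attempt), "Approved")         # AI finalizes
    elif assessing_type == "AI Aggregated Marking for Manual Approval":
        queue_for_instructor_review(attempt, ai_score(attempt))  # instructor finalizes

route_grading("AI Aggregated Marking for Manual Approval", "attempt-001")
```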
Provide Rich Instructional Hints
InstructionReference Fields: The more context the AI has, the better feedback and scoring it can produce (e.g., “Focus on connecting real-world cases in the essay”).
Use Rubrics: AI performs best if each criterion is clearly defined, with actionable scoring descriptors.
5. xAPI Alignment
Distinguish “Object” vs. “Context” Extensions
Object Extensions: If your xAPI statement includes
"extensions": {"http://globebyte-ldm.com/referenceid": "..."}
in the object definition, the LDM can read it directly as the ReferenceId.
Context Extensions: If a context extension is also provided, the platform prioritizes the object extension but can fall back on the context extension if needed.
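A minimal sketch of that precedence, using the extension key shown above; the extract_reference_id helper is illustrative, not the platform’s actual parser:

```python
REFERENCE_ID_EXT = "http://globebyte-ldm.com/referenceid"

def extract_reference_id(statement: dict):
    """Read the ReferenceId from an xAPI statement: object extension first,
    context extension as the fallback."""
    object_ext = (statement.get("object", {})
                           .get("definition", {})
                           .get("extensions", {}))
    if REFERENCE_ID_EXT in object_ext:
        return object_ext[REFERENCE_ID_EXT]
    return statement.get("context", {}).get("extensions", {}).get(REFERENCE_ID_EXT)
```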
Provide Hierarchical “Parent” and “Grouping” Activities
Parent: Indicates the immediate container (e.g., “Module #2”).
Grouping: If activities belong to a broader theme or program (like “Cohort: 2024”), store that in the grouping context. This clarifies for the system how the activity fits into the bigger picture.
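Putting the pieces together, a statement that carries the ReferenceId extension plus parent and grouping context might look like the following sketch; all IRIs and names are illustrative:

```python
statement = {
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {
        "id": "http://yourdomain.com/CURR01/PATH02/COURSE5/ACT1",
        "definition": {
            "extensions": {
                "http://globebyte-ldm.com/referenceid":
                    "http://yourdomain.com/CURR01/PATH02/COURSE5/ACT1"
            }
        },
    },
    "context": {
        "contextActivities": {
            # Immediate container of this activity (e.g., "Module #2").
            "parent": [{"id": "http://yourdomain.com/CURR01/PATH02/MODULE2"}],
            # Broader theme or program (e.g., "Cohort: 2024").
            "grouping": [{"id": "http://yourdomain.com/programs/cohort-2024"}],
        }
    },
}
```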
Use Bloom’s-Aligned Verbs
e.g., “accessed,” “analyzed,” “created.”
Consistent Verb Displays help the AI interpret the learner’s cognitive engagement, especially if you want advanced analytics on skill progression.
6. Resource Management Practices
Keep Resource Types Organized
Define your Learning Resource Type set (e.g., “Lecture Notes,” “Video,” “Lab Manual,” “Model Solution”). This classification helps learners and instructors locate the right materials more quickly.
Decide on Hosting Strategy Early
Salesforce Files: Centralized and secure.
Public URL: Rapid linking but can cause 404 errors if URLs change.
AWS S3: Scalable for large or numerous files.
Use the “Pages” Field for Large Documents
If your resource is a 100-page PDF and only pages 10–15 matter for a unit, note that in the Pages field (e.g., “10-15”). Learners get clearer instructions, and you reduce confusion.
7. Incremental (Phased) Deployment
Start with One Curriculum or Pathway
Prototype your approach: Create a single Curriculum, one Pathway, and a handful of Modules or Courses. Test how rubrics, objectives, and AI features integrate before scaling up.
Gather Feedback
Encourage instructors, TAs, or pilot learners to provide feedback on clarity, ease of navigation, and scoring. Adjust your rubrics, resource links, or activity references as needed.
Scale in a Controlled Manner
Once the pilot is stable, replicate your structure or expand it to other departments. Because the LDM is flexible, you can gradually add more objectives, rubrics, or entire Curricula without risking chaos.
8. Security & Privilege Management
Field-Level Security
Many LDM fields handle sensitive data (scores, feedback, AI statuses).
Best Practice:
Instructors see attempts, scores, and basic learner info.
Learners see their own records but not others’.
Instructional Designers typically see design objects (Curricula, Rubrics, Objectives) but not detailed learner attempts.
Admins have full CRUD.
Privilege Sets & Roles
System Administrator: Manage user accounts, system settings, and integrations with full CRUD access.
Instructional Designer: Create and edit Curricula, Pathways, Courses, Modules, Rubrics, and Resource Types.
Instructor: Oversee the classes or modules they teach; can create or update tasks, grade attempts, but with limited ability to modify top-level structural objects.
Learner: Submit attempts, view personal marks, and see relevant resources only.
Consider Automations for Membership
If your organization frequently changes group rosters, create an automated process (e.g., a Flow or custom integration) to update Learner Group Membership records, maintaining accurate JoinDate, LeaveDate, and status.
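If you build a custom integration instead of a declarative Flow, the reconciliation logic might resemble this sketch; the field names mirror the Learner Group Membership object described here, while the function itself is an assumption:

```python
from datetime import date

def sync_membership(current_roster: set, memberships: list) -> None:
    """Reconcile membership records (dicts with LearnerId, JoinDate,
    LeaveDate, Status) against today's roster of learner IDs."""
    today = date.today().isoformat()
    known = {m["LearnerId"] for m in memberships}

    # Close out records for learners who have left the group.
    for m in memberships:
        if m["Status"] == "Active" and m["LearnerId"] not in current_roster:
            m["Status"], m["LeaveDate"] = "Inactive", today

    # Open records for learners who have just joined.
    for learner_id in current_roster - known:
        memberships.append({"LearnerId": learner_id, "JoinDate": today,
                            "LeaveDate": None, "Status": "Active"})
```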
9. Grouping & Collaboration
Maintain Clear Group Definitions
Learning Groups can represent classes, cohorts, or ad-hoc project teams. Provide a descriptive Name and Type (e.g., “General” vs. “Study Cohort”).
If you attach a SlackReferenceId, ensure it’s valid and updated if channels move or are renamed.
Track Group Membership Over Time
For historical or compliance reasons, keep accurate JoinDate and LeaveDate. This helps you see how long a learner actively participated in a group.
Leverage “Active” or “Inactive” statuses to reflect real-time membership.
Use Groups for Peer Learning
Encourage peer reviews, group projects, or shared rubrics to evaluate group outputs. This fosters social learning and synergy within your LDM.
10. Assessment Execution & Design
Plan Submission Windows Wisely
SubmissionOpenDate plus SubmissionPeriod define the submission window and keep it fair. If you want a weekend exam, set the open date to Saturday at 9:00 AM with a 48-hour period for completion.
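The close of the window is simply the open date plus the period, as in this sketch:

```python
from datetime import datetime, timedelta

submission_open = datetime(2024, 6, 1, 9, 0)    # Saturday, 9:00 AM
submission_period = timedelta(hours=48)          # 48-hour window
submission_close = submission_open + submission_period
print(submission_close)                          # 2024-06-03 09:00:00
```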
Set “MinimumMarkRequired” Thoughtfully
A numeric pass threshold helps instructors and learners see the difference between “meeting expectations” and “exceeding” them.
Combine with Rubrics to clarify how partial or advanced performance is recognized.
Align Assessment Tasks to Objectives
If your objective states “Evaluate marketing campaign ROI,” include Text Tasks or Choice Tasks that truly measure evaluation skills.
Rubrics with “Analysis & Depth” or “Evidence & Support” criteria help tie assessment tasks to your stated goals.
Provide Multi-Tiered Feedback
Scores: The raw numeric result.
ScoreReason: The logic behind awarding that score.
ScoreInstruction or MarkInstruction: General improvement suggestions.
ScorePersonalInstruction or MarkPersonalInstruction: Personalized advice if the learner struggles with specific concepts.
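As a data-shape sketch of how these tiers fit together (the class and its snake_case fields are illustrative stand-ins for the fields listed above):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttemptFeedback:
    """Illustrative grouping of the multi-tiered feedback fields."""
    score: float                                      # the raw numeric result
    score_reason: str                                 # logic behind the score
    score_instruction: Optional[str] = None           # general improvement advice
    score_personal_instruction: Optional[str] = None  # learner-specific advice

feedback = AttemptFeedback(
    score=7.5,
    score_reason="Strong analysis; the evidence section cites only one source.",
    score_instruction="Support claims with at least two independent sources.",
    score_personal_instruction="Revisit the unit on evaluating source quality.",
)
```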
11. Continuous Learner Engagement
Offer Quick Wins
Formative Assessments with immediate feedback keep learners motivated.
Use smaller rubrics for short tasks and simple multiple-choice questions for periodic confidence checks.
Nudges & At-Risk Segmentation
If you track progress in Learner Activity or mark them as “At Risk,” integrate triggers: e.g., a Slack message to the group or an automated email with extra resources.
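One way such a trigger could be wired, as a sketch; the progress threshold, webhook URL, and learner fields are all assumptions, and a declarative Flow may be a better fit in practice:

```python
import json
import urllib.request

AT_RISK_THRESHOLD = 0.4                 # illustrative progress cutoff
SLACK_WEBHOOK = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"  # placeholder

def nudge_if_at_risk(learner: dict) -> None:
    """Post a Slack nudge when a learner's progress falls below the cutoff."""
    if learner["progress"] >= AT_RISK_THRESHOLD:
        return
    payload = {"text": f"{learner['name']} looks at risk; "
                       f"consider sharing extra resources."}
    request = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # fire the nudge
```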
Encourage Reflection
For text attempts, the AttemptReflection field invites learners to articulate their thought process. This fosters metacognitive skills and deeper learning retention.
12. Advanced xAPI and Data Analytics
Store Full Statements If Needed
Some organizations require in-depth auditing or compliance records. Use Learner XAPIStatement to keep entire JSON logs of interactions.
This is optional but valuable if you want to re-check or reinterpret data in the future.
Provide Rich Context
The more detail you supply in the xAPI statements (object definitions, context arrays, grouping references, etc.), the more advanced insights your analytics engines (or AI) can generate.
Bloom’s-Aligned Verbs: e.g., “accessed,” “evaluated,” “created,” so analytics can parse cognitive levels of engagement.
Over-Time Trends
Compare repeated attempts or tasks to measure skill progression. If multiple tasks reference the same objective, the system (or your analytics) can chart improvement from “Fair” to “Good” or “Excellent.”
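A minimal sketch of charting that progression from attempt data; the records and band labels are illustrative:

```python
from collections import defaultdict

# Illustrative attempt log: (objective, attempt_number, rubric band).
attempts = [
    ("Evaluate marketing ROI", 2, "Good"),
    ("Evaluate marketing ROI", 1, "Fair"),
    ("Evaluate marketing ROI", 3, "Excellent"),
]

progression = defaultdict(list)
for objective, _, band in sorted(attempts, key=lambda a: a[1]):
    progression[objective].append(band)

for objective, bands in progression.items():
    print(f"{objective}: {' -> '.join(bands)}")
# Evaluate marketing ROI: Fair -> Good -> Excellent
```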
Conclusion
The Learning Journey Data Model is a robust framework that, when combined with these best practices, can truly transform your learning environment. By:
Structuring your data (Curricula, Pathways, Courses, Modules) with clear IDs and consistent references,
Aligning tasks and rubrics with recognized pedagogical frameworks,
Enabling AI effectively for advanced scoring and feedback, and
Leveraging xAPI-based event data for in-depth insights,
you will create a seamless experience that fosters high-quality learning outcomes, instructor efficiency, and data-driven decision-making.
Continue refining these best practices to suit your unique context—each organization and learning scenario can benefit from a tailored approach. Ultimately, by following these guidelines and remaining intentional in your design, you’re well on your way to building an innovative, learner-centric ecosystem.
Remember: Regularly revisit and update your best practices as your institution grows, new technologies arise, and your learners’ needs evolve. The LDM is flexible—use it to continuously improve the learning journey for all stakeholders.