How to Automate Student Engagement Alerts in 48 Hours
The difference between an education provider with 67% completion rates and one with 92% completion rates is rarely content quality. It is intervention speed. According to Brandon Hall Group's 2025 Learning Engagement Benchmarking study, organizations that identify and respond to learner disengagement within 48 hours recover 73% of at-risk learners. Organizations that wait 7 or more days recover only 21%. For education providers managing 500 to 10,000 active learners, manual engagement monitoring is effectively impossible: no human team can track daily activity patterns across thousands of learners simultaneously. This guide walks through every step of building an automated engagement alert system that catches disengagement within 48 hours and triggers intervention workflows before learners abandon their programs.
Key Takeaways
Catching disengagement within 48 hours recovers 73% of at-risk learners versus 21% recovery when intervention is delayed 7+ days, according to Brandon Hall Group's 2025 research
The 10-step implementation process takes 4-8 weeks from initial configuration through optimized production deployment
Five engagement signals combine into a composite risk score that predicts dropout with 84% accuracy when properly calibrated
Automated alerts reduce manual monitoring effort from 15-20 hours/week to 2-3 hours/week for exception review only
US Tech Automations connects engagement signals directly to multi-step intervention workflows that escalate automatically based on learner response
Prerequisites: What You Need Before Starting
Before building your engagement alert system, verify these components are in place:
| Prerequisite | Requirement | Why It Matters |
|---|---|---|
| LMS with activity tracking API | Moodle, Canvas, Docebo, TalentLMS, or similar with API access | Engagement signals must be machine-readable |
| Historical engagement data | Minimum 90 days of learner activity data | Required for calibrating risk score thresholds |
| Defined intervention protocols | Written procedures for at-risk learner outreach | Automation needs clear actions to trigger |
| Communication channels | Email + at least one additional channel (SMS, in-app, Slack) | Multi-channel outreach improves response rates |
| Staff assigned to intervention | At least 1 person responsible for at-risk learner response | Automated alerts need human follow-through for escalated cases |
| Learner consent for communications | Opt-in for engagement-related notifications | Required for compliance in many jurisdictions |
According to NCES 2025 data on online learning engagement, the prerequisite most commonly missing is written intervention protocols. Education providers often have informal processes ("the instructor reaches out when they notice someone falling behind") but no documented workflow with specific triggers, actions, and timelines. Formalizing these protocols is essential before automation.
What qualifies as a student engagement signal? According to ATD's 2025 Learning Analytics Framework, engagement signals fall into five categories: login frequency, content interaction (video plays, page views, resource downloads), assessment activity (quiz attempts, assignment submissions), social interaction (forum posts, peer comments), and progress velocity (pace of module completion relative to expected timeline). Each category provides independent evidence about a learner's engagement level.
Step-by-Step: Building Your Engagement Alert System
Step 1. Define Your Engagement Signal Framework
Identify the specific data points your LMS tracks that indicate engagement or disengagement.
Engagement signal inventory:
| Signal Category | Specific Metrics | Data Source | Collection Frequency |
|---|---|---|---|
| Login activity | Last login timestamp, login frequency, session duration | LMS auth logs | Real-time |
| Content interaction | Video watch %, page dwell time, resource downloads | LMS activity tracking | Per-event |
| Assessment activity | Quiz attempts, assignment submissions, grades | LMS gradebook | Per-event |
| Social interaction | Forum posts, replies, peer reviews submitted | LMS discussion module | Per-event |
| Progress velocity | Modules completed vs. expected pace | LMS progress tracking | Daily calculation |
According to Brandon Hall Group's 2025 research, organizations that track all five signal categories achieve 84% accuracy in predicting learner dropout, while organizations tracking only login activity achieve 52% accuracy. The predictive power comes from signal combination rather than any single metric.
Tracking all 5 engagement signal categories achieves 84% dropout prediction accuracy versus 52% for login-only tracking, according to Brandon Hall Group's 2025 Learning Analytics study
How do you determine which engagement signals are most predictive? According to NCES 2025 data, the strongest single predictor of course non-completion is progress velocity (pace of module completion). However, progress velocity alone misses learners who are accessing content but not comprehending it (indicated by low assessment scores) and learners who are progressing but disengaged socially (indicated by zero forum activity). The most effective systems weight signals based on your specific program type; those weights are calibrated during Step 3.
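To make the signal inventory concrete, here is a minimal Python sketch of a per-learner daily engagement snapshot. The field names are illustrative, not a prescribed schema; your LMS will expose these values under different names.

```python
from dataclasses import dataclass

@dataclass
class EngagementSnapshot:
    """One learner's daily signal readings across the five categories.
    Field names are illustrative, not an LMS schema."""
    learner_id: str
    days_since_last_login: int       # login activity
    pct_behind_pace: float           # progress velocity (0.0 = on pace)
    assessments_overdue: int         # assessment activity
    interactions_last_3_days: int    # content interaction
    days_since_last_forum_post: int  # social interaction

# A learner showing disengagement across all five categories
snap = EngagementSnapshot("4582", days_since_last_login=3, pct_behind_pace=0.30,
                          assessments_overdue=2, interactions_last_3_days=0,
                          days_since_last_forum_post=9)
```

Capturing all five categories in one record per learner per day is what makes the composite scoring in Step 3 straightforward.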
Step 2. Configure Your LMS to Emit Engagement Events
Your LMS must send structured data about learner activity to your automation platform.
LMS configuration by platform:
| LMS | Event Method | Supported Events | Configuration Complexity |
|---|---|---|---|
| Moodle | Webhooks + API | Login, completion, grade, forum | Moderate (plugin required) |
| Canvas | LTI + Data API | Login, submission, grade, pageview | Moderate |
| Docebo | Webhooks + API | Login, completion, enrollment, time | Low (native support) |
| TalentLMS | API polling | Login, completion, grade | Low |
| Teachable | API + webhooks | Completion, login (limited) | High (limited events) |
| Thinkific | Webhooks | Completion, enrollment | High (limited events) |
Recommended event configuration:
Configure login event tracking. Enable your LMS to send a webhook or API-accessible event each time a learner authenticates. Include learner ID, timestamp, and session duration when available.
Enable content interaction tracking. Configure your LMS to log video play events, page view events, and resource download events with learner ID and content ID.
Set up assessment event notifications. Configure webhooks for quiz submission, assignment submission, and grade posting events.
Enable forum activity tracking. If your programs include discussion components, configure event notifications for new posts and replies.
Configure progress milestone events. Set up notifications when learners complete modules or reach defined progress checkpoints.
Establish a data refresh schedule. For metrics that require calculation (like progress velocity), set up a daily batch job that computes current values for all active learners.
Test event delivery reliability. Send 100 test events and verify that all 100 arrive at your automation platform without loss or duplication.
Document event payload schemas. Record the exact data fields included in each event type for reference during workflow configuration.
According to ATD's 2025 Learning Technology Integration report, webhook-based event delivery is 95% more reliable than scheduled API polling for engagement tracking because webhooks deliver events as they occur, while polling can miss events that happen between poll intervals. For LMS platforms that support only API polling (like TalentLMS), set the polling interval to no more than 15 minutes.
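As an illustration of the "document event payload schemas" step, the sketch below normalizes a raw webhook body into a common internal event shape. The required fields and the schema itself are assumptions; real payloads differ by LMS platform, which is exactly why a normalization layer is worth writing.

```python
import json
from datetime import datetime, timezone

# Minimal fields every normalized event must carry (an assumed contract,
# not any particular LMS's payload format)
REQUIRED_FIELDS = {"learner_id", "event_type", "timestamp"}

def normalize_event(raw):
    """Parse a webhook body and coerce it into a common internal event schema.
    Raises ValueError if required fields are missing, so malformed events
    are rejected at ingestion rather than corrupting scores downstream."""
    payload = json.loads(raw)
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {
        "learner_id": str(payload["learner_id"]),
        "event_type": payload["event_type"],  # e.g. "login", "quiz_submission"
        "occurred_at": datetime.fromisoformat(payload["timestamp"])
                               .astimezone(timezone.utc),
        "detail": payload.get("detail", {}),  # event-specific extras
    }

event = normalize_event('{"learner_id": 4582, "event_type": "login", '
                        '"timestamp": "2025-06-01T09:30:00+00:00"}')
```

Normalizing timestamps to UTC at ingestion avoids timezone bugs when the daily scoring batch compares "days since last login" across learners in different regions.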
Step 3. Build the Composite Risk Score Model
Individual engagement signals are noisy. A learner might miss a day of login because of a holiday without being disengaged. The composite risk score combines multiple signals to distinguish genuine disengagement from normal variation.
Risk score calculation framework:
| Signal | Weight | Low Risk (0 pts) | Medium Risk (1 pt) | High Risk (2 pts) |
|---|---|---|---|---|
| Days since last login | 25% | 0-1 days | 2-3 days | 4+ days |
| Progress velocity | 25% | On pace or ahead | 10-25% behind pace | 25%+ behind pace |
| Assessment activity | 20% | Submitted on time | 1 assessment overdue | 2+ assessments overdue |
| Content interaction | 15% | Active (daily interactions) | Declining (50%+ reduction) | Absent (no interactions in 3+ days) |
| Social participation | 15% | Regular posting/replying | Declining | No activity in 7+ days |
Risk score interpretation:
| Composite Score | Risk Level | Action |
|---|---|---|
| 0-2 | Low | No action (normal engagement) |
| 3-4 | Moderate | Automated engagement nudge |
| 5-6 | High | Advisor alert + escalated outreach |
| 7-10 | Critical | Immediate intervention + supervisor notification |
According to Brandon Hall Group's 2025 predictive analytics research, composite risk scores with 4+ signal categories and properly calibrated weights achieve a false positive rate below 8%. Single-signal alerts (like login-only monitoring) produce false positive rates of 25-40%, creating alert fatigue that causes staff to ignore legitimate warnings.
Composite risk scores produce false positive rates below 8% versus 25-40% for single-signal alerts, according to Brandon Hall Group's 2025 predictive analytics research
How do you calibrate engagement alert thresholds for your specific programs? The thresholds in the table above are starting points. According to ATD's 2025 guidance, organizations should calibrate thresholds using historical data: analyze the engagement patterns of learners who completed their programs versus those who dropped out, and set thresholds at the points where the patterns diverge. US Tech Automations provides threshold optimization tools that analyze your historical LMS data and recommend program-specific thresholds based on actual completion patterns.
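The scoring tables above can be expressed directly in code. The sketch below is one plausible reading of the framework: each signal maps to 0-2 points, and because the weights total 100% the weighted sum is scaled by 5 to land on the 0-10 composite scale. Boundary handling (e.g. exactly 25% behind pace) and the day-based simplifications for the content and social signals are assumptions to calibrate against your own data.

```python
def signal_points(days_since_login, pct_behind_pace, assessments_overdue,
                  days_no_content, days_no_social):
    """Map raw signals to 0/1/2 points per the threshold table above.
    'Declining' content/social activity is approximated by days without
    interaction -- an assumption, not part of the published framework."""
    return {
        "login":      0 if days_since_login <= 1 else 1 if days_since_login <= 3 else 2,
        "velocity":   0 if pct_behind_pace < 0.10 else 1 if pct_behind_pace < 0.25 else 2,
        "assessment": min(assessments_overdue, 2),
        "content":    0 if days_no_content == 0 else 1 if days_no_content < 3 else 2,
        "social":     0 if days_no_social <= 2 else 1 if days_no_social < 7 else 2,
    }

WEIGHTS = {"login": 0.25, "velocity": 0.25, "assessment": 0.20,
           "content": 0.15, "social": 0.15}

def composite_score(pts):
    # Weights sum to 1.0 and each signal scores 0-2, so multiplying the
    # weighted sum by 5 maps it onto the 0-10 composite scale.
    return round(5 * sum(WEIGHTS[k] * v for k, v in pts.items()), 1)

def risk_level(score):
    if score <= 2: return "Low"
    if score <= 4: return "Moderate"
    if score <= 6: return "High"
    return "Critical"
```

For example, a learner 4 days since login, 30% behind pace, with 2 overdue assessments, no content interaction for 3 days, and no forum activity for 9 days scores 2 points on every signal, giving a composite of 10.0 (Critical).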
Step 4. Configure Alert Triggers and Routing
Define which risk score levels trigger which responses and who receives them.
Alert routing matrix:
| Risk Level | Trigger Condition | Primary Alert Recipient | Secondary Alert | Alert Channel |
|---|---|---|---|---|
| Moderate (3-4) | Score reaches 3 for first time | Automated system (no human) | None | Learner receives nudge email |
| High (5-6) | Score reaches 5 or stays 3-4 for 72 hours | Student success advisor | Instructor | Email + Slack/Teams |
| Critical (7-10) | Score reaches 7 or stays 5-6 for 48 hours | Student success manager | Program director | Email + Slack + SMS |
| Persistent (any) | Score at moderate+ for 14+ consecutive days | Program director | Client admin (B2B) | Email + meeting request |
Alert content should include:
| Field | Purpose | Example |
|---|---|---|
| Learner name and ID | Identification | "Jane Smith (ID: 4582)" |
| Current risk score | Severity context | "Risk score: 7/10 (Critical)" |
| Risk score breakdown | Diagnostic detail | "Login: 2, Velocity: 2, Assessment: 2, Content: 1" |
| Days at current risk level | Duration context | "At High risk for 4 days" |
| Last activity summary | Recent behavior | "Last login: 3 days ago, last submission: 6 days ago" |
| Suggested intervention | Action guidance | "Schedule 1:1 call, extend assignment deadline" |
| Course progress % | Context | "42% complete (expected: 65%)" |
According to ATD's 2025 research on intervention effectiveness, alerts that include suggested interventions based on the specific risk factors (rather than generic "contact this learner" messages) result in 45% faster staff response times because the advisor does not need to investigate before acting.
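The routing matrix translates naturally into a single dispatch function. The sketch below is illustrative; in particular, how persistence is tracked (hours at the current score band, days at moderate or above) is an assumption about the underlying state your pipeline would maintain.

```python
def route_alert(score, hours_at_band, days_moderate_plus):
    """Map a learner's composite score and dwell times to the alert
    routing matrix above. Recipient role names are illustrative."""
    if score >= 3 and days_moderate_plus >= 14:
        return {"level": "Persistent",
                "to": ["program_director", "client_admin"],
                "channels": ["email", "meeting_request"]}
    if score >= 7 or (score >= 5 and hours_at_band >= 48):
        return {"level": "Critical",
                "to": ["success_manager", "program_director"],
                "channels": ["email", "slack", "sms"]}
    if score >= 5 or (score >= 3 and hours_at_band >= 72):
        return {"level": "High",
                "to": ["success_advisor", "instructor"],
                "channels": ["email", "slack"]}
    if score >= 3:
        return {"level": "Moderate",
                "to": [],  # fully automated: learner nudge, no human recipient
                "channels": ["learner_nudge_email"]}
    return {"level": "Low", "to": [], "channels": []}
```

Note how the dwell-time conditions implement the "stays at a lower band too long" escalations: a score of 4 held for 72 hours routes as High even though 4 alone is only Moderate.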
Step 5. Build the Automated Intervention Workflow
The alert system only identifies at-risk learners. The intervention workflow takes action to re-engage them.
Intervention escalation workflow:
Score reaches Moderate (3-4): Automated nudge email. Send a personalized email reminding the learner of their progress, highlighting upcoming milestones, and providing a direct link to their next incomplete module. No human involvement required.
No engagement response within 48 hours: Second nudge via alternate channel. If the learner does not log in or interact with content within 48 hours of the first nudge, send an SMS (if opted in) or in-app notification with a different message emphasizing the value of completing the program.
Score reaches High (5-6): Advisor notification with context. Alert the assigned student success advisor with the complete engagement profile. The advisor receives pre-compiled data that would take 15-20 minutes to gather manually.
Advisor outreach attempt logged. The advisor contacts the learner (call, email, or message). The workflow tracks whether outreach was attempted and its outcome.
No advisor outreach within 24 hours: Manager escalation. If the advisor has not logged an outreach attempt within 24 hours of receiving the alert, the workflow escalates to the student success manager.
Score reaches Critical (7-10): Multi-party intervention. The program director, student success team, and (for B2B programs) the client administrator are all notified. A meeting request is auto-generated.
Learner re-engages: Monitoring continues. When the learner's risk score drops below 3, the intervention workflow pauses but monitoring continues. If the score rises again within 14 days, the workflow resumes at the next escalation level rather than starting over.
Learner withdraws: Record and report. If the learner formally withdraws or the withdrawal deadline passes with no re-engagement, the workflow logs the outcome for reporting and triggers any applicable refund or administrative processes.
| Intervention Step | Delivery Method | Response Window | Escalation Trigger |
|---|---|---|---|
| Nudge email | Automated email | 48 hours | No login or content interaction |
| Alternate channel nudge | SMS or in-app | 48 hours | No response to email |
| Advisor outreach | Human (phone/email) | 24 hours (advisor response) | Advisor does not log attempt |
| Manager escalation | Human + automated | 24 hours | No advisor action |
| Multi-party intervention | Meeting request | 48 hours | Score remains critical |
| Outcome logging | Automated | Ongoing | Program timeline expiration |
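The escalation sequence above, including the rule that a workflow re-triggered within 14 days resumes at the next level rather than restarting, can be sketched as a small step progression. How a paused workflow is recorded is an assumption about your workflow platform's state model.

```python
# Ordered escalation steps from the table above
ESCALATION_STEPS = ["nudge_email", "alt_channel_nudge", "advisor_outreach",
                    "manager_escalation", "multi_party_intervention"]

def next_step(current, paused_at=None, paused_days_ago=None):
    """Advance the escalation workflow one step. If a previously paused
    workflow re-triggers within 14 days, resume one step past the pause
    point (our reading of 'resumes at the next escalation level').
    The final step repeats rather than escalating further."""
    if paused_at is not None and paused_days_ago is not None and paused_days_ago <= 14:
        current = paused_at
    if current is None:
        return ESCALATION_STEPS[0]
    idx = ESCALATION_STEPS.index(current)
    return ESCALATION_STEPS[min(idx + 1, len(ESCALATION_STEPS) - 1)]
```

A re-trigger more than 14 days after the pause falls through to a fresh start at the nudge email, matching the "rather than starting over" rule only inside the 14-day window.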
Automated engagement nudge emails recover 34% of moderate-risk learners without any human intervention according to ATD's 2025 Learner Retention Technology Report
According to ATD's 2025 Learner Retention Technology Report, the automated nudge steps (the first two intervention steps above) resolve approximately 34% of moderate-risk cases without human involvement. This is the highest-ROI component of the engagement alert system because it requires zero staff time while recovering one-third of disengaging learners.
Step 6. Configure Learner-Facing Communications
The automated messages sent to learners must be motivational, not punitive. Learners who feel surveilled respond negatively.
Communication design principles:
| Principle | Good Example | Bad Example |
|---|---|---|
| Focus on progress, not absence | "You're 42% through Healthcare Compliance - your next module on HIPAA takes just 25 minutes" | "We noticed you haven't logged in for 3 days" |
| Provide a specific next action | "Pick up where you left off: Module 4, Lesson 3" | "Please log in and continue your course" |
| Include a direct link | Button: "Continue Module 4" (deep link to exact page) | "Visit our learning platform" |
| Acknowledge difficulty (if applicable) | "Module 3's assessment is challenging - here are study resources" | "Your quiz scores are below average" |
| Reference the end goal | "You're 3 modules away from your HIPAA certification" | "You need to complete your course" |
According to Credly's 2025 research on learner communication effectiveness, engagement nudge emails with deep links to the learner's exact last-viewed content page achieve 2.4x higher click-through rates than emails linking to the course homepage. The effort reduction of not having to navigate to the right spot makes re-engagement significantly easier.
How should engagement alerts be worded to avoid feeling intrusive? According to NCES 2025 research on learner communication preferences, 82% of online learners appreciate proactive outreach about their progress, but only when the message is framed as support rather than surveillance. The key framing difference is between "we're tracking you" (surveillance) and "we want to help you succeed" (support). Messages should never mention risk scores, engagement metrics, or inactivity durations. They should reference progress toward the learner's goal and provide a clear, easy next step.
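Applying these principles, here is an illustrative nudge template with a deep link to the learner's exact last-viewed lesson. The URL pattern and learner record fields are hypothetical, not a real LMS API. Note that the copy references progress and the next step and never mentions inactivity, metrics, or risk scores.

```python
NUDGE_TEMPLATE = (
    "Hi {first_name},\n\n"
    "You're {pct_complete}% through {course_name} - your next lesson, "
    '"{next_lesson}", takes about {est_minutes} minutes.\n\n'
    "Continue where you left off: {deep_link}\n"
)

def render_nudge(learner):
    """Render a progress-framed nudge email for one learner."""
    # Deep link to the exact last-viewed lesson, not the course homepage
    # (the 2.4x click-through difference cited above). URL pattern is
    # illustrative only.
    deep_link = (f"https://lms.example.com/courses/{learner['course_id']}"
                 f"/modules/{learner['module_id']}/lessons/{learner['lesson_id']}")
    fields = {k: learner[k] for k in
              ("first_name", "pct_complete", "course_name",
               "next_lesson", "est_minutes")}
    return NUDGE_TEMPLATE.format(deep_link=deep_link, **fields)

msg = render_nudge({"first_name": "Jane", "pct_complete": 42,
                    "course_name": "Healthcare Compliance",
                    "next_lesson": "HIPAA Basics", "est_minutes": 25,
                    "course_id": 7, "module_id": 4, "lesson_id": 3})
```

Keeping the template free of any engagement-metric placeholders makes it structurally impossible for a surveillance-toned message to slip through.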
Step 7. Implement the Data Pipeline
The engagement alert system requires a data pipeline that aggregates events from your LMS, calculates risk scores, and delivers them to your automation platform.
Data pipeline architecture:
Event ingestion. LMS webhooks deliver individual events (logins, submissions, page views) to US Tech Automations in real-time.
Event storage. Events are stored in a time-series format for historical analysis and trend calculation.
Daily score calculation. A scheduled workflow runs daily (recommended: 6 AM local time) to calculate composite risk scores for all active learners.
Score change detection. The system compares today's scores to yesterday's scores and identifies learners whose risk level has changed.
Alert generation. Risk level changes trigger the appropriate alert and intervention workflow step.
Feedback loop. Intervention outcomes (learner re-engaged, learner withdrew, no response) feed back into the scoring model for ongoing calibration.
| Pipeline Component | Processing Frequency | Data Volume (5,000 learners) | Latency Requirement |
|---|---|---|---|
| Event ingestion | Real-time | 10,000-50,000 events/day | < 5 seconds |
| Event storage | Continuous | 300,000-1,500,000 events/month | < 1 minute |
| Score calculation | Daily batch | 5,000 scores/run | < 15 minutes |
| Alert generation | On score change | 100-500 alerts/day | < 1 minute |
| Intervention trigger | On alert | 50-200 interventions/day | < 5 minutes |
According to Docebo's 2025 Learning Analytics Architecture report, daily score calculation is sufficient for engagement monitoring. Real-time scoring (recalculating on every event) is computationally expensive and does not meaningfully improve outcomes because intervention response times are measured in hours, not minutes.
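The score change detection stage can be sketched as a comparison of two daily snapshots, represented here as plain dictionaries mapping learner IDs to risk levels (an assumed storage shape; in production these would come from the time-series store).

```python
def detect_level_changes(yesterday, today):
    """Compare two daily snapshots ({learner_id: risk_level}) and return
    the learners whose risk level changed, which is what feeds the
    alert generation stage. Newly enrolled learners appear with a
    previous level of None."""
    changes = []
    for learner_id, level in today.items():
        prev = yesterday.get(learner_id)
        if prev != level:
            changes.append({"learner_id": learner_id,
                            "from": prev, "to": level})
    return changes

changes = detect_level_changes(
    {"a": "Low", "b": "Moderate"},
    {"a": "High", "b": "Moderate", "c": "Low"},
)
```

Emitting only the deltas is what keeps daily alert volume at the 100-500 range shown in the table rather than re-alerting on every learner every day.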
Step 8. Test with Historical Data Before Going Live
Before activating alerts that reach learners and staff, validate your risk score model against historical outcomes.
Backtesting protocol:
Select a test cohort. Choose 200-500 learners who completed or dropped out of a program in the past 6 months.
Calculate historical risk scores. Apply your scoring model to their actual engagement data from the start of their enrollment.
Compare predictions to outcomes. For each learner, determine whether the risk score would have correctly predicted their completion or dropout.
Calculate accuracy metrics. Measure true positive rate (correctly identified dropouts), false positive rate (completers flagged as at-risk), and detection lead time (how many days before dropout the score reached High).
| Metric | Target | Action if Not Met |
|---|---|---|
| True positive rate (dropout prediction) | > 75% | Adjust signal weights |
| False positive rate | < 10% | Raise alert thresholds |
| Detection lead time (days before dropout) | > 7 days | Add earlier signal indicators |
| Moderate alert recovery rate (backtested) | > 25% | Adjust nudge email content/timing |
According to Brandon Hall Group's 2025 predictive analytics benchmarking, a true positive rate above 75% with a false positive rate below 10% represents a well-calibrated model. Models with false positive rates above 15% create alert fatigue that undermines the system's effectiveness because advisors stop trusting the alerts.
Models with false positive rates above 15% create alert fatigue that causes advisors to ignore legitimate warnings, according to Brandon Hall Group's 2025 research
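The backtesting metrics reduce to a few lines of Python. The record shape below, with a per-learner dropout flag, a flag for whether the model ever scored them High, and a detection lead time, is an assumption about how you export historical outcomes from your LMS.

```python
def backtest_metrics(records):
    """Compute true/false positive rates and median detection lead time
    from a historical cohort. Each record is assumed to look like
    {'dropped_out': bool, 'flagged_high': bool, 'lead_days': int or None}."""
    dropouts   = [r for r in records if r["dropped_out"]]
    completers = [r for r in records if not r["dropped_out"]]
    tpr = sum(r["flagged_high"] for r in dropouts) / len(dropouts) if dropouts else 0.0
    fpr = sum(r["flagged_high"] for r in completers) / len(completers) if completers else 0.0
    # Lead time only exists for dropouts the model actually caught
    leads = sorted(r["lead_days"] for r in dropouts if r["flagged_high"])
    median_lead = leads[len(leads) // 2] if leads else None
    return {"true_positive_rate": tpr,
            "false_positive_rate": fpr,
            "median_lead_days": median_lead}
```

Comparing the returned rates against the target table above (TPR > 75%, FPR < 10%, lead time > 7 days) tells you which calibration action to take before going live.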
Step 9. Deploy in Phases with Escalating Automation
Roll out the engagement alert system progressively to manage risk and build confidence.
Deployment phasing:
Week 1-2: Score calculation only (no alerts). Calculate risk scores daily for all learners but do not send any alerts. Review scores manually to validate the model against your intuitive assessment of learner engagement.
Week 3-4: Advisor alerts only (no learner-facing messages). Enable advisor alerts for High and Critical risk levels. Advisors receive alerts but learners receive no automated communications. Track advisor response rates and intervention outcomes.
Week 5-6: Add automated nudge emails for Moderate risk. Enable the automated nudge emails for Moderate-risk learners. Monitor click-through rates, opt-out rates, and re-engagement rates.
Week 7-8: Full automation with escalation. Enable the complete escalation workflow including multi-channel nudges, advisor notifications, manager escalation, and client admin notifications.
| Phase | Alerts Active | Learner Impact | Staff Impact | Duration |
|---|---|---|---|---|
| Score only | None | None | Score review (2 hrs/wk) | 2 weeks |
| Advisor alerts | High + Critical | None | Alert response (5 hrs/wk) | 2 weeks |
| + Nudge emails | Moderate | Engagement nudges | Nudge monitoring (3 hrs/wk) | 2 weeks |
| Full automation | All levels | Complete workflow | Exception review (2 hrs/wk) | Ongoing |
According to ATD's 2025 implementation data, organizations that deploy engagement alerts in phases experience 60% fewer post-launch issues than organizations that activate all components simultaneously.
Step 10. Monitor, Optimize, and Report
After full deployment, establish ongoing monitoring and optimization cycles.
Weekly monitoring dashboard:
| Metric | Target | Warning Threshold |
|---|---|---|
| Learners at Moderate risk | < 15% of active learners | > 20% |
| Learners at High/Critical risk | < 5% of active learners | > 8% |
| Nudge email re-engagement rate | > 30% | < 20% |
| Advisor response time (from alert) | < 4 hours | > 8 hours |
| False positive rate (monthly) | < 10% | > 15% |
| Course completion rate (rolling 90-day) | Improving or stable | Declining |
| Alert-to-intervention conversion rate | > 80% | < 60% |
Monthly optimization activities:
Review false positive cases. Identify learners flagged as at-risk who completed without issues. Adjust thresholds to reduce unnecessary alerts.
Analyze intervention effectiveness. Determine which intervention channels (email, SMS, phone) and which message content achieves the highest re-engagement rate.
Adjust signal weights. If certain signals are producing more false positives or negatives, rebalance the composite score weights.
Update nudge email content. Refresh the automated message templates based on performance data and learner feedback.
Generate stakeholder reports. Produce monthly reports showing completion rate trends, intervention volumes, and at-risk learner outcomes for program directors and client administrators.
According to Brandon Hall Group's 2025 ongoing operations data, engagement alert systems that are optimized monthly achieve 15% better outcomes than systems that are deployed and left unchanged. The optimization primarily improves false positive rates and nudge email effectiveness, both of which degrade over time as learner behavior patterns evolve.
For organizations already using workflow automation in other departments, the monitoring and optimization patterns for engagement alerts are identical to those used for any automated workflow: measure, adjust thresholds, refine content, and review escalation effectiveness.
Expected Results by Organization Size
| Metric | 500 Learners | 2,000 Learners | 5,000 Learners | 10,000 Learners |
|---|---|---|---|---|
| Alerts generated daily | 15-30 | 60-120 | 150-300 | 300-600 |
| Nudge emails sent daily | 8-15 | 30-60 | 75-150 | 150-300 |
| Advisor alerts daily | 3-8 | 12-30 | 30-75 | 60-150 |
| Expected completion rate improvement | +8-12% | +10-15% | +12-18% | +15-20% |
| Staff time for monitoring | 2-3 hrs/wk | 3-5 hrs/wk | 5-8 hrs/wk | 8-12 hrs/wk |
| Staff time saved vs. manual monitoring | 8-12 hrs/wk | 15-22 hrs/wk | 25-35 hrs/wk | 40-55 hrs/wk |
Automated engagement alerts save 15-55 hours per week in manual monitoring effort depending on learner population size, according to ATD's 2025 operational benchmarking
According to NCES 2025 data, the completion rate improvement from automated engagement alerts correlates strongly with organization size because larger organizations have a wider gap between their current (under-resourced manual monitoring) capabilities and what automation enables. Small organizations with dedicated instructors who already know each learner personally see smaller improvements because they are starting from a higher engagement monitoring baseline.
Frequently Asked Questions
How quickly does the engagement alert system start showing results? According to Brandon Hall Group's 2025 data, organizations typically see measurable completion rate improvement within 60-90 days of full deployment. The first 30 days establish baseline metrics, and the impact becomes visible as the first cohort of intervened-upon learners reaches their program completion milestones.
What is the ideal risk score calculation frequency? Daily calculation is the recommended standard, according to ATD's 2025 guidance. More frequent calculation (hourly or real-time) adds computational cost without meaningful improvement because intervention response times are measured in hours, not minutes. Less frequent calculation (weekly) misses the 48-hour intervention window that drives the highest recovery rates.
How do you prevent alert fatigue among advisors? According to Brandon Hall Group's research, the three most effective strategies are: (1) maintaining false positive rates below 10% through ongoing threshold calibration, (2) providing actionable context in every alert so advisors can act immediately without investigation, and (3) implementing alert batching that groups moderate-risk alerts into a daily digest rather than individual notifications.
Can engagement alerts work for self-paced programs without deadlines? Yes, but the progress velocity signal requires a different definition. For self-paced programs, velocity is measured against the learner's own historical pace rather than a fixed timeline. If a learner completed 3 modules per week for the first month and then drops to zero, the system detects the velocity change even without an external deadline. According to ATD's 2025 data, this relative velocity approach achieves 78% accuracy in self-paced programs.
What privacy considerations apply to engagement monitoring? According to FERPA guidelines (for US educational institutions) and GDPR (for organizations serving EU learners), engagement monitoring data is considered educational records subject to privacy protection. Learners should be informed during enrollment that their engagement data is monitored for academic support purposes. Data should not be shared outside the educational context, and learners should have access to their own engagement data upon request.
How does the system handle learners who are legitimately pausing (medical leave, work travel)? Configure an "approved absence" status that suppresses alerts for a defined period. According to ATD's 2025 guidance, the system should allow instructors, advisors, or learners themselves to flag planned absences. When the absence period ends, the system resumes monitoring with an adjusted baseline that accounts for the gap.
What is the cost of building an engagement alert system? Platform costs for US Tech Automations range from $200-$2,500/month depending on learner volume and workflow complexity. Implementation takes 4-8 weeks of staff time. The total first-year investment typically ranges from $8,000 (500 learners) to $45,000 (10,000 learners), with ongoing annual costs of $5,000-$30,000. According to Brandon Hall Group's 2025 ROI data, engagement alert systems deliver 3-7x annual return through improved completion rates and reduced learner churn.
Can the system integrate with our existing CRM or student information system? Yes. US Tech Automations connects to Salesforce, HubSpot, and most SIS platforms through REST API integrations. Engagement data and intervention records can flow to your CRM for unified learner lifecycle management. For organizations already managing customer follow-up automation through a CRM, adding engagement alert data provides a complete view of each learner's journey.
How do you measure the ROI of engagement alerts specifically? According to ATD's 2025 ROI methodology, measure the completion rate for learners who received automated interventions versus a control group that did not. The difference in completion rates, multiplied by the average enrollment value, gives you the revenue impact. Add the staff time savings from eliminated manual monitoring for the total ROI. Most organizations see 200-500% ROI in the first year, according to Brandon Hall Group's 2025 benchmarking.
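That methodology reduces to simple arithmetic. The sketch below uses illustrative numbers, not benchmarks from the cited research.

```python
def engagement_alert_roi(intervened_completion, control_completion, learners,
                         avg_enrollment_value, hours_saved_per_week,
                         hourly_staff_cost, annual_system_cost):
    """ROI per the methodology above: completion-rate lift times enrollment
    value, plus annualized staff time savings, divided by system cost.
    All parameter names and values are illustrative."""
    revenue_impact = ((intervened_completion - control_completion)
                      * learners * avg_enrollment_value)
    staff_savings = hours_saved_per_week * 52 * hourly_staff_cost
    return (revenue_impact + staff_savings) / annual_system_cost

# e.g. 82% vs 72% completion across 2,000 learners at $500/enrollment,
# 18 hrs/week saved at $40/hr, $25,000 annual system cost
roi = engagement_alert_roi(0.82, 0.72, 2000, 500, 18, 40, 25000)
```

With these inputs the function returns roughly 5.5x, which sits inside the 3-7x range Brandon Hall Group reports.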
Conclusion: Catch Disengagement Before It Becomes Dropout
The 10-step process outlined in this guide transforms learner engagement monitoring from an impossible manual task to a systematic, automated workflow that catches disengagement within 48 hours and intervenes before learners abandon their programs. The investment in signal framework design (Step 1), risk score calibration (Step 3), and thorough backtesting (Step 8) ensures that your alerts are accurate enough to trust and actionable enough to drive results.
Request a free consultation with US Tech Automations to assess your current engagement monitoring capabilities, review your LMS data availability, and receive a customized implementation plan for your learner population and program structure. The consultation includes a historical data analysis estimating the number of at-risk learners your current process is missing and the completion rate improvement you can expect from automated engagement alerts.
About the Author

Helping businesses leverage automation for operational efficiency.