Connecting Resources with Groundbreaking Medical Technology

Research and Development 

At SportGait, we strive to deliver innovative medical technology to individuals.

Goals of SportGait Consortium

The goal of our consortium is to develop a reliable, scalable, and effective collaboration, enabling our partners to:

  • Evaluate assessment tools that screen for injury, risk factors and baseline functioning
  • Investigate and identify injury and recovery patterns
  • Analyze and establish effective treatments that ensure safe return to play (prognosis)

Scientific Criteria for Selecting the Assessments in the SportGait Platform

Why Use the Peer Review Process for Research

Science is built on the accumulation of knowledge, and a critical part of that accumulation is the peer review process. For the scientific community, peer review is essential because it allows scholarly work to be evaluated by journal editors and reviewers who are experts in the same field (Kelly et al., 2014), and that level of scrutiny for vetting research quality has a long history (Spier, 2002). Peer-reviewed publications are one of the most trusted and relied-upon forms of scientific communication, and the process helps ensure that the quality of the research and the validity of the findings meet a high standard.

Specific Criteria Used In Our Selection Process

High test-retest reliability (the typical standard in assessment is r values of approximately 0.7 or higher)
  • Reliability is the instrument’s capacity to produce consistent results over time (see the illustrative sketch after this list).
  • Without a reliable measure, it is impossible to determine whether a change in score is due to the patient or to the inconsistency of the instrument.
Established validity
  • Validity is the scientific verification that the instrument measures what it claims to measure.
  • Failure to properly assess each domain will result in an inaccurate assessment.
Large, stratified normative data for both age and gender
  • Normative data is a representative comparison group that allows one to interpret the meaning of a score relative to others.
  • Good norms allow test scores to be interpreted even in the absence of baseline data, and they help us understand functioning relative to others (also illustrated in the sketch after this list).
Selecting tests that minimize practice effects
  • Practice effects are when test scores improve simply from repeatedly taking a test.
  • Good assessments should not show practice effects.
Selecting tests that minimize ceiling effects
  • Ceiling effects occur when it is too easy for people to score at or near the highest possible score on a test; this is a common occurrence for tests that show practice effects.
  • Ceiling effects can mask actual changes when the test is too easy.
Including tests that assess multiple domains
  • Failure to assess multiple domains will necessarily result in an inaccurate (incomplete) assessment.
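
As a rough illustration of the reliability and normative-data criteria above, the sketch below shows how test-retest reliability can be computed as a Pearson correlation between two administrations of the same test, and how a raw score can be interpreted against age- and gender-stratified norms as a z-score. All data, function names, and normative values in the sketch are hypothetical and are not drawn from the SportGait platform.

```python
# Illustrative sketch only: hypothetical data and normative values,
# not taken from the SportGait platform or any cited study.
import numpy as np

def test_retest_reliability(time1, time2):
    """Pearson r between two administrations of the same test;
    values of roughly 0.7 or higher are the typical standard."""
    return np.corrcoef(time1, time2)[0, 1]

def z_score(raw_score, norm_mean, norm_sd):
    """Interpret a raw score relative to an age- and gender-stratified
    normative group (no individual baseline needed)."""
    return (raw_score - norm_mean) / norm_sd

# Hypothetical example
t1 = np.array([52, 47, 60, 55, 49, 58])   # scores at first administration
t2 = np.array([50, 49, 61, 53, 50, 57])   # scores at retest
print(test_retest_reliability(t1, t2))      # about 0.9, above the 0.7 standard

# A raw score of 62 against hypothetical norms (mean = 55, SD = 7)
print(z_score(62, norm_mean=55, norm_sd=7))  # 1.0 SD above the normative mean
```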

Assessments Selected for the SportGait Platform

The SportGait Battery comprises several component parts, each of which is reviewed separately below.

Neurocognitive Assessment

What did we select?

The Conners Continuous Performance Test (Conners CPT) is an interactive cognitive test measuring processing speed, variability in attention, impulsivity, distractibility, and rapid decision making. The Conners CPT 3 (and its predecessors) has been established as a well-validated tool that has been used by neuropsychologists and medical professionals for decades to assist in diagnosing neurocognitive (most typically attention) deficits (e.g., Conners, 2014; Frazier et al., 2004).

Three ostensibly similar versions of the Conners CPT have been developed over the years, and the previous and current versions of the test (i.e., Conners CPT, Conners CPT 2, and Conners CPT 3) provide information regarding such cognitive factors as the speed of mental processing, attention–inattention, response inhibition, variability in responses, and errors of omission and commission. Moreover, continuous performance tasks produce internal reliability coefficients ranging from .74 to .92 across their subscales (Riccio et al., 2002), and speed of mental processing appears to be an especially robust component of the Conners CPT 3 (e.g., Shaked et al., 2019).
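
For background, internal reliability (internal consistency) is commonly indexed with coefficient alpha; whether the coefficients cited above are alpha values or another index is not stated here, so the formula below is offered only as a general reference.

```latex
% Coefficient alpha for a test with k items, item variances \sigma_i^2,
% and total-score variance \sigma_X^2
\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)
```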

Why was the CPT selected for our platform?

The broader research literature has demonstrated the Conners CPT 3’s strong psychometric characteristics:

Test-Retest Reliability:

  • 0.70 to 0.90 (Hui-Chen et al., 2009)
  • 0.92 to 0.94 internal reliability for clinical and nonclinical samples (Conners et al., 2018; Multi-Health Systems Inc., n.d.)

Normative Data: stratified by age and gender (Conners et al., 2018; Multi-Health Systems Inc., n.d.)

  • Age Range:
    • Desktop: 5 years to 85 years (Conners et al., 2018; Multi-Health Systems Inc., n.d.)
    • Mobile: 8 years up to 82 years
  • Gender: Male, female, and nonbinary

Practice and ceiling effects are not observed with the Conners CPT 3 (Conners et al., 2018; Multi-Health Systems Inc., n.d.). The built-in practice trial further minimizes practice effects.

Neurobehavioral Assessments

What did we select?

The National Institutes of Health (NIH) 4-Meter Gait Test is commonly used to assess walking speed over a short distance. Gait involves dynamic postural control, in which the individual changes their base of support throughout the movement. Gait slowing effects have been shown to be remarkably reliable and durable over time (Parker et al., 2006).

The 4-meter gait test has proven to be remarkably reliable, consistently yielding some of the highest reliability coefficients, with intraclass correlation coefficient (ICC) values of .96 to .98 for adults (Peters et al., 2013) and an ICC of .81 for adolescents (Alsalaheen, 2014).

Why we selected it

The research has demonstrated the NIH 4-meter test’s strong characteristics:

Test-Retest Reliability: 0.96 to 0.98 for adults (Peters et al., 2013); 0.81 for adolescents (Alsalaheen, 2014)

Normative Data: stratified by age and gender (National Institutes of Health)

  • Age range: 5-85 years from NIH Toolbox (Kallen et al., 2012)
  • Gender: Male and female

No documented ceiling or practice effects with repeat testing.

What did we select?

The Balance Error Scoring System (BESS) is an assessment that measures static balance and postural control. Postural control is reflected in an individual’s amount of postural sway, with greater neuromotor control associated with less sway.

The full BESS has been shown to have reliability ranging from moderate to good (Chang et al., 2014), with reliability coefficients of .7 achieved in a sample of children and adolescents when using separate norms for males and females (Mcleod et al., 2006). Note that test–retest values of more than .90 can be obtained by averaging multiple BESS administrations in one day (Broglio et al., 2009).
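
As a back-of-the-envelope illustration (not an analysis from the cited studies), the Spearman-Brown prophecy formula shows why averaging several parallel administrations raises reliability; with a single-administration reliability of .70, averaging three to four administrations pushes the expected reliability to roughly .90.

```latex
% Reliability of the average of k parallel administrations,
% given single-administration reliability \rho
\rho_k = \frac{k\rho}{1 + (k - 1)\rho}
% k = 3, \rho = .70: \rho_3 = \frac{2.10}{2.40} \approx .88
% k = 4, \rho = .70: \rho_4 = \frac{2.80}{3.10} \approx .90
```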

The computerized BESS, which automates and facilitates the delivery and timing of the full BESS, was more recently developed to provide a more objective and quantitative method of assessing balance errors. Comparisons of the computerized BESS to standard scoring procedures have shown that computerized BESS scoring is more sensitive in its measurements of performance and postural stability than scores calculated from traditional motion capture data alone (Alberts et al., 2014). Additionally, inter-rater reliability between computerized and standard human-calculated BESS scores has been found to range from fair to excellent (0.44–0.99) across the six main stances (Caccese & Kaminski, 2014; cf. Houston et al., 2019). With respect to validity, the computerized scores are generally equivalent to human-rated scores across balance conditions (Glass et al., 2019).

Why we selected it

The following research has demonstrated strong characteristics of the BESS:

Test-Retest Reliability: 0.70 to 0.90 in children and adults (e.g., Mcleod et al., 2006)

Normative Data: stratified by age and gender (e.g., Iverson & Koehle, 2013).

  • Age range: 5 to 23 stratified by age and gender, age 24+ stratified by age (e.g., Iverson & Koehle, 2013).
  • Gender: Male and female

No documented practice or ceiling effects.

The BioKinetoGraph

What did we select?

The broad research literature clearly indicates that gait speed assessed using accelerometers has superior sensitivity to dysfunction relative to manually collected gait speed data (Maggio et al., 2016). Moreover, accelerometers can produce reliable outputs across a range of clinical populations (Byun et al., 2016; Fujiwara et al., 2020; Henrikson et al., 2004; Kluge et al., Kobsar et al., 2016; Moore et al., 2017; Werner et al., 2020). Recent research introduced noninvasive, continuous triaxial accelerometer data collection to create a visual depiction of gait, referred to as a BioKinetoGraph (BKG). Similar to the 12-lead electrocardiogram, BKG waveforms are related to specific neuromotor gait cycle events, including cadence, foot-strike, push-off, double stance time, and swing phase. Raw data are combined to generate BKG waveforms as gravitational accelerations over time. Tracings are used to obtain range of motion, amplitudes, and timing intervals for the various components of the waveform that relate directly to the gait cycle, and the adopted method is similar to that employed in other published studies (Godfrey et al., 2015). Sixteen variables, organized into four conceptual domains of power (amplitudes), stride (timing intervals), balance (stability), and symmetry (regularity), are used, in part because these variables correspond to similar variables in the gait literature and were recently validated (Lecci et al., 2023).
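
To make the data flow concrete, the sketch below shows one simplified way that triaxial accelerometer samples might be reduced to gait-cycle features such as cadence, step time, and step-time symmetry. This is not the published BKG algorithm; the peak-detection thresholds, the symmetry index, and all names are illustrative assumptions.

```python
# Illustrative sketch only: a simplified reduction of triaxial accelerometer
# data to gait-cycle features. This is NOT the published BKG algorithm.
import numpy as np
from scipy.signal import find_peaks

def gait_features(acc_xyz: np.ndarray, fs: float) -> dict:
    """acc_xyz: (n_samples, 3) accelerations in g; fs: sampling rate in Hz.
    Assumes the recording contains at least several steps of steady walking."""
    # Acceleration magnitude with gravity (~1 g) removed
    mag = np.linalg.norm(acc_xyz, axis=1) - 1.0
    # Treat prominent peaks as foot-strikes; require peaks at least 0.25 s
    # apart (i.e., cadence below ~240 steps per minute)
    peaks, _ = find_peaks(mag, height=0.1, distance=int(0.25 * fs))
    step_times = np.diff(peaks) / fs              # seconds between foot-strikes
    cadence = 60.0 / step_times.mean()            # steps per minute
    # Crude symmetry index: alternating (left/right) step times should match
    left, right = step_times[::2], step_times[1::2]
    n = min(len(left), len(right))
    symmetry = 1.0 - np.abs(left[:n] - right[:n]).mean() / step_times.mean()
    return {
        "cadence_steps_per_min": cadence,
        "mean_step_time_s": step_times.mean(),
        "step_time_symmetry": symmetry,           # 1.0 = perfectly symmetric
        "mean_peak_amplitude_g": mag[peaks].mean()
    }
```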

This recent study (Lecci et al., 2023) introduced a method for analyzing raw accelerometer data (inertial measurement units) to reconstruct the gait cycle using the BioKinetoGraph (BKG). The authors examined the association between the BKG and the NIH 4-meter gait test and compared the BKG to other neurobehavioral measures. Additionally, they examined whether footwear and walking surface influence gait (BKG) and evaluated test-retest reliability. In Study 1, a within-subjects design with 60 participants illustrated the effects of footwear (shoes/no shoes) and walking surface (tile floor/grass) on BKG data, indicating the need to standardize both variables when measuring gait. Study 1 also established BKG test-retest reliability (Pearson rs) for the no-shoes/tile-surface condition, ranging from .72 to .91 (mean = .80).

Gait assessments should be standardized for footwear and especially for walking surface. When standardized (no shoes, hard surface), the BKG shows strong test-retest reliability.

Why we selected it

Summary of research demonstrating strong characteristics of the BKG:

Test-Retest Reliability: Gait accelerometers produce reliable outputs across a range of clinical populations (e.g., Byun et al., 2016; Fujiwara et al., 2020; Henrikson et al., 2004; Kluge et al., Kobsar et al., 2016; Moore et al., 2017; Werner et al., 2020).

  • 0.72 to 0.91 (mean = 0.80) for no shoes (Lecci, Dugan, et al., 2023)
  • Mean = 0.79 for the BKG mobile across four domains (power, stride, balance, and symmetry) in a sample of over 4,000 subjects (Lecci, Williams, et al., 2023)

Normative Data: stratified by age and gender

  • BKG mobile norms established by Lecci et al. (2023)
  • Age range: 5 to 78 years old
  • Gender: Male and female

No documented ceiling or practice effects.

How The SportGait Battery Works Together

Published research shows that the measures in SportGait’s battery (neurocognitive and neurobehavioral) combine to achieve accuracies of 91% and 84.4% (AUCs of 1.0 and .947, respectively) when predicting the return-to-play decision making of a pediatric neurologist in a sample of 111 cases (Keith et al., 2019).

Published research also shows the additive benefit of the different components of the SportGait Battery. The neurocognitive test alone (CCPT 3) accounts for 21.5% of the variance (d = 1.05) in symptoms, and the neurobehavioral measures (BESS and NIH 4-Meter Gait) account for an additional 11.5% of the variance (explaining 18.6% of the variance, d = .96, when entered first; Lecci et al., 2019; Lecci et al., 2021). These effect sizes are considered large to very large and reflect a marked (two- to five-fold) increase in predictive validity relative to existing measures commonly used in concussion assessments. Neurobehavioral and neurocognitive domains each provide unique and substantial information with respect to symptom endorsement.
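
For context (not an analysis from the cited papers), an AUC of .947 means the battery’s output would rank a randomly chosen case from one decision category above a randomly chosen case from the other about 94.7% of the time. The variance-explained figures above are also consistent with the conventional conversion between Cohen’s d and variance explained for two equal-sized groups:

```latex
% Conventional d-to-variance-explained conversion (equal group sizes assumed)
r^2 = \frac{d^2}{d^2 + 4}
% d = 1.05: r^2 = \frac{1.1025}{5.1025} \approx .216 \;(\approx 21.5\% \text{ of the variance})
% d = 0.96: r^2 = \frac{0.9216}{4.9216} \approx .187 \;(\approx 18.6\% \text{ of the variance})
```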

References

  • Keith, J., Williams, M., Taravath, S., & Lecci, L. (2019). A clinician’s guide to machine learning in neuropsychological research and practice. Journal of Pediatric Neuropsychology, 5(4), 177–187. https://doi.org/10.1007/s40817-019-00075-1
  • Lecci, L., Freund, C. T., Ayearst, L. E., et al. (2021). Validating a short Conners’ CPT3 as a screener: Predicting self-reported CDC concussion symptoms in children, adolescents, and adults. Journal of Pediatric Neuropsychology, 7, 169–181. https://doi.org/10.1007/s40817-021-00107-9
  • Lecci, L., Williams, M., Taravath, S., et al. (2019). Validation of a concussion screening battery for use in medical settings: Predicting Centers for Disease Control concussion symptoms in children and adolescents. Archives of Clinical Neuropsychology, 35(3), 265–274. https://doi.org/10.1093/arclin/acz041
SportGait Research Consortium

Peer-Reviewed Research Studies

Why Choose SportGait Consortium?

Our consortium platform offers multiple benefits, enabling you to receive acknowledgment for your published research papers.

We accomplish these benefits through the collaboration of medical providers, academia, and innovators:

  • Collaborations with Experts

    The SportGait Consortium gives you the opportunity to collaborate with leading academics, innovators, neuroscientists, mathematicians, medical providers, and others in the field.

  • Access to Advanced Technology

    We give you access to advanced technology, such as IoT devices, gyroscopes, and accelerometers, to capture and analyze measurement data and to develop innovative assessment tools and technologies.

  • Purposeful Data Collection and Analysis

    SportGait facilitates purposeful collection and analysis of different data types through various means, including artificial intelligence (AI), big data, data science libraries, data cleansing, traditional statistical modeling, and machine learning.

  • Accomplish Process Standardization

    With SportGait, you can standardize your research processes. In addition, the platform supports data sharing through IRB approval, validation and dissemination of findings, and alpha and beta testing for market studies.

Secure. Private. Compliant.

FDA Listed

CPT Cognitive is currently registered under 21 CFR Part 882.1470 - 510(k) exempt, Class II.

BKG™ Gait is currently registered under 21 CFR 890.1600 - 510(k) exempt, Class I.

Additionally, SportGait completes regular external audits with AICPA SOC 2 Type II reports and meets legal requirements under HIPAA, HITECH, and applicable state laws.

Compliance Logos