April 2016 SEGway

 
Research and Assessment News from SEG Measurement  
 

31 Pheasant Run
New Hope Pennsylvania 18938
 
800.254.7670
 
 267.759.0617

Dear Scott,

I just returned from the ASU GSV Conference and am feeling optimistic about the future of education.  I saw many new products with great promise for helping students learn.  At the same time, I am concerned that there are so many companies making claims about product effectiveness with little evidence that they actually achieve better outcomes.  We remain firm in our belief that students and teachers deserve to use products that are technically sound and scientifically proven to be effective.  

It was also disappointing to see the "bad rap" that assessment is taking. Understandably, many are concerned about the large amount of instructional time devoted to mandated assessments and the consequences of that heavy emphasis. We continue to believe that well-developed, validated assessments can play a central role in the educational process.
 
Please take a few minutes to check out the articles in this issue of SEGway. We offer some important insights on different types of evidence that can be gathered to provide proof of product effectiveness.  We also provide an important discussion of test validity and continue to answer the questions you provide us. If there are questions about assessment or efficacy research that you would like to see answered, please let us know.

We encourage you to learn more about our work in assessment and efficacy research by connecting with us at conferences. We look forward to continuing the discussion with you through the newsletter, on Twitter (@segmeasure) and on LinkedIn (SEG Measurement).
 
I am looking forward to seeing you in our home city of Philadelphia in June, at either the AAP Content in Context conference or the CCSSO National Conference on Student Assessment. Please let me know if you would like to meet up while I am there!

Take a look at our website at www.segmeasurement.com, as it is frequently updated with developments in the field.  And, feel free to email me at selliot@segmeasurement.com. I always look forward to hearing from you.
 

Sincerely,

 
Scott Signature
 
 

Scott Elliot
SEG Measurement 

Determining The Right Type of Research for Your Product
Not all Studies and Research Reports Provide Support for the Effectiveness of Your Product
 
 
The educational community in general, and buyers of educational products in particular, are increasingly sensitive to the need for sound proof of product effectiveness. It makes sense that students should be exposed to proven solutions and that limited educational resources be spent wisely.
 
But what constitutes sound evidence? It is widely agreed that sound evidence requires direct collection of evidence about the student outcomes achieved when using a given product. Typically, a group of teachers or students is asked to use the product, and information about the outcomes achieved through its use is collected. These outcomes may be compared to those of a control group not using the product, and various measures of effectiveness may be collected both before and after product use.
 
Unfortunately, we have seen a lot of misleading and inaccurate information circulated suggesting that a background white paper presenting the research underpinnings of a product is sufficient to demonstrate product effectiveness. While we agree that documenting the research underpinnings of a product is an important first step in product development, assuming that this alone demonstrates effectiveness could not be more wrong. There is no substitute for studying the actual use of your product and measuring educational outcomes based on that use to establish proof of effectiveness.
 
SEG Measurement and several of our colleagues in the industry can provide you with the evidence you need to be successful and pass scientific muster. When determining a research strategy for an educational product, there are many types of evidence of effectiveness that can be considered. Each varies in terms of complexity, cost, time frame, and benefits. In many cases, a combination of initiatives will give the most complete picture of support for a product. In most cases, only one or two initiatives fall within the budget, so it is critical to select the strategy with the largest returns.
 
The options for gathering support for your product include, but are not limited to, the following:
 
  • Case study on a successful implementation to share details of best practices and insights regarding observed or perceived outcomes
  • Focus groups, cognitive labs, interviews, or observations to gather information about the usability of the product, perceptions of effectiveness, challenges, and suggestions
  • Detailed surveys of teachers, students, and administrators about product satisfaction and effectiveness
  • Post-hoc analysis of data to investigate outcomes for a user group, possibly compared to a non-user group
  • Quasi-experimental controlled and monitored study comparing outcomes for a user group with a comparable non-user group
  • Experimental study with randomized class assignment comparing outcomes for a user group with a comparable non-user group
Again, all of these approaches include the direct study of your product; a supporting paper can be of value, but a paper that relies on an argument that "In theory, this product should work because we have based it on sound research" does not constitute evidence of effectiveness.   
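When outcomes for a user group are compared with those of a comparable non-user group, as in the quasi-experimental and experimental designs above, the comparison is often summarized as a standardized effect size. Here is a minimal sketch; the post-test scores are invented purely for illustration, and a real study would use a full statistical package.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) between a user
    group and a comparable non-user group, using the pooled
    standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Invented post-test scores for a product user group and a non-user group
users = [78, 85, 82, 90, 74, 88]
non_users = [72, 80, 75, 84, 70, 79]
d = cohens_d(users, non_users)  # positive d favors the user group
```

An effect size expresses the difference in standard-deviation units, which lets reviewers compare results across studies that use different outcome measures.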
 
Please be careful when planning your research strategy that you invest your research dollars on the type of investigation that will be accepted by your consumers, peers, and the research community.
SEG Measurement can help you devise the most effective research strategy for your product using one or more approaches. Please write to us at info@segmeasurement.com. We look forward to hearing from you!

Psychometrician's Corner
Test Validation: Supporting claims about test scores and test-based decisions 
 
When a test is administered, the scores/results are used to make a decision, such as: Did the student master the skills I just taught? Does the student have the required skills to move on to the next grade? Does this employee have the requisite skills to perform a particular job? Should this person be assigned remedial instruction? What are the gaps in knowledge in a particular area? How do my students compare to other students who learned this material?
 
In other words, you are making claims about what it means to achieve a certain test score, receive a passing grade, or make a decision based on the test scores. Test validation asks the question: Can the claims you are making based on the test be supported by evidence? Does the test score really mean what you claim it means?
 
This relates directly to the extent to which we can accurately make the decisions we want to make based on the assessment results. Tests are developed for a specific purpose or purposes. Unfortunately, many test users and test creators simply assume that the test can be used for the purpose intended. Yet without evidence that the test can be used accurately for that purpose, we run the risk of making incorrect decisions. In short, it is critical to provide evidence that the test is actually living up to the claims you are making.
 
Fortunately, there are many sources and ways to collect evidence supporting the validity of a test. Here are a few critical sources of evidence that we work with our clients to obtain:
  • Item quality - Individual items should perform well statistically (appropriate difficulty and discrimination) and be free of bias.
  • Reliability (consistency of scores) - We cannot have a valid use of a test if the test does not produce consistent results. So, reliability is necessary for validity.
  • Face validity - When stakeholders view the test with an understanding of its purpose, they should get an immediate impression that the test is suited for its purpose. This is a quick sanity check that the test appears to be measuring what it is intended to measure.
  • Construct validity - The test should cover the content and follow the underlying theory of the skills and content to be assessed.
  • Concurrent validity - The test should produce similar results to similar assessments or observations of the same measured outcomes.  
  • Predictive validity - The test should be accurate at predicting or relating to a related performance.
Some of the validity concerns can be addressed during the development of the test, the creation of the scoring/scaling rules, and the design of the report and feedback. In addition, conducting a pilot or field test of the form will also help to gather the necessary information to investigate the support for the validity of the assessment to be used for a particular purpose. Some additional external data including scores on similar assessments, course grades, or observation ratings may also be needed.
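To make the reliability idea concrete, here is a minimal sketch of Cronbach's alpha, the most common internal-consistency reliability estimate. The right/wrong item responses below are invented for illustration only; operational testing programs would use established psychometric software rather than hand-rolled code.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability.

    item_scores: one list per test item, each holding that item's
    scores across the same set of examinees.
    """
    k = len(item_scores)        # number of items
    n = len(item_scores[0])     # number of examinees

    def var(xs):                # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(var(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))

# Invented 0/1 (wrong/right) responses: 3 items, 5 examinees
items = [
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 0, 1, 1, 1],
]
alpha = cronbach_alpha(items)  # very low for this tiny toy example
```

Alpha ranges up to 1.0, and high-stakes tests typically aim for roughly 0.8 or higher; a low value, as in this toy example, signals that the items are not measuring a common construct consistently.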
 
While it is tempting to repurpose an assessment that has already been developed, it is important to ensure that its use is valid before using it for a new purpose. Otherwise, there are large risks that someone could be accepted or rejected inaccurately, retained or promoted inaccurately, or inaccurately classified in some other way. There is too much at stake to not take the time to ensure that assessments are valid for the intended purpose. We should all ensure we are upholding the Standards for Educational and Psychological Testing.

Leave your psychometric worries to us.  We offer a complete suite of psychometric services.  Please contact us to help you plan and execute this work at info@segmeasurement.com or 800-254-7670. 

 
Recent Questions Asked by Educators and Publishers Considering Conducting an Efficacy Study
 
Here is the latest installment of this popular section of the newsletter with common questions and answers. If you have specific questions that are not covered here or if you are interested in exploring any of these questions further, please contact us.  We are here to help.
 
Do you help with recruiting participants?
We offer a full range of recruitment services to our research clients. We can take full responsibility for recruitment, we can help you design your recruitment strategy and communication plan for you to recruit, or we can simply use the participants you provide us with. Regardless of the recruitment execution plans, we will help you define the recruitment goals and sampling plan as part of the study design.
 
How do teachers benefit from participation in a study?
Teachers who participate in research studies are able to contribute to the development and evaluation of a product and provide direct feedback to the publishers. They are also able to provide valuable feedback, share any technology challenges, share lessons learned, and serve as an ambassador for innovation in their school. The publishers and the teachers have a shared goal to help the students learn in engaging, interesting, and effective ways and to support a seamless integration of new tools with current tools. As a further incentive, teachers and schools are often offered an extension of their current subscription or a free subscription or other materials/supplies for the school. We understand that the participants are taking their time to provide valuable feedback, implement a new program, or to implement their current instruction as a comparison group and that their time is valuable. All of the study participants are doing their part to provide the highest quality education to their students. The incentives provided to schools and teachers make it a valuable and rewarding endeavor for all participants.
 
How should we use the study results?
Even with the most positive of results, a completed study that is not shared is no more valuable than not having conducted the study at all. A study report has little value when it sits on the shelf. For maximum value, we encourage our clients to share the results on their website, with proposals for new implementations, through social media, through press releases, and through professional conferences. We can help you to spread the word and can also help to ensure that the messaging for study communication is accurate and clear. Many customers will require you to submit the written report of the study in support of the effectiveness of your product. In addition to sharing the results, it is best if the study is verified through a peer review process and presentation at a suitable professional conference or in a relevant publication.  
 
What questions do you have? Let us know what questions you want answered. Contact info@segmeasurement.com to share your feedback.

Contact SEG today to find out how we can help you establish the effectiveness of your education product or service:
hrickert@segmeasurement.com     267-759-0617 ext 102

SEG At Upcoming Conferences
Let's Meet!
 
We are looking forward to seeing our colleagues and meeting new friends at the upcoming conferences. We are participating in several sessions, and we invite you to join us.
 
   Look for us at these upcoming conferences:
  • AAP Content in Context, June 6 - 8, Philadelphia, PA
  • CCSSO National Conference on Student Assessment, June 20 - 22, Philadelphia, PA
  • AACE EdMedia 2016, June 27 - June 30, Vancouver, Canada
  • NJEA Convention, October 10, Atlantic City, NJ
  We would love to meet with you and discuss how we can help you build strong assessments and get the proof of effectiveness you need for success.

  If you would like to meet with a representative from SEG Measurement to discuss how we might help you with your assessment and research needs, please contact us at info@segmeasurement.com.

About SEG Measurement
Building Better Assessments and Evaluating Product Efficacy
SEG Measurement conducts technically sound product efficacy research for educational publishers, technology providers, government agencies and other educational organizations, and helps organizations build better assessments. We have been meeting the research and assessment needs of organizations since 1979. SEG Measurement is located in New Hope, Pennsylvania and can be accessed on the web at www.segmeasurement.com.