SEGway               Summer 2017
Research and Assessment News from SEG Measurement  
31 Pheasant Run
New Hope, Pennsylvania 18938
We have time to sneak in one more issue of SEGway before we take a publication break for the summer. Enjoy this issue, and we will return later in August with our Back to School issue.

As many of you know, SEG Measurement has two divisions, Efficacy Research and Assessment. While they each have their own areas of focus, both are critical to our clients' success. This issue includes an article on assessment and one on efficacy research; we hope you find both useful.

Research and assessment are certainly two critical and related pieces of the educational puzzle. We need effective measures to accurately assess student progress, determine instructional focus, and serve other educational purposes. The success of the educational enterprise also depends on identifying and implementing evidence-based solutions, so that teachers and students are using products and services truly proven to work.

We hope that you have had a good school year and that the next couple of months give you a chance to "recharge your batteries." Soon, we will all be busy planning for next year, and we hope the information in this issue helps with that process: we share our thoughts on how to properly develop an assessment and how to successfully recruit participants for a research study.

Take a look at our website, which is frequently updated with developments in the field, and email me with any questions about item banks, assessments, or how to create your own marketing campaign with an effectiveness study. I always look forward to hearing from you.
Scott Signature

Scott Elliot
SEG Measurement 

A "Pile of Items" is Not an Assessment
Why developing a bank of good test items is not enough
So, you have written a great bank of test questions to accompany your new product. You made sure that you aligned them to the standards and had them reviewed by educators and editors. But this does not mean you have an "assessment." There are more tasks that must be completed.

You need to administer the items to a set of test takers similar to those who will ultimately take the test and conduct a psychometric analysis of the results. We can learn a lot by administering the questions to a group of students and examining how they answered them.
Psychometrics is a scary word to many educators. It evokes images of a mad scientist performing brain experiments on live patients. Thankfully, psychometrics is, in reality, a lot less threatening: it is the science of measuring mental constructs like student knowledge, skills, and attitudes. This science helps you evaluate the questions and the overall assessment. A proper psychometric analysis is also a professional requirement in the field of assessment and a common expectation of buyers.
How do we know if the items are any good? Until we try them out on a group of students, we are really "guessing" at how good the questions are. Psychometrics supports the review of each question to determine its level of difficulty, how effectively it discriminates between stronger and weaker students, how well it "fits" the underlying construct, and other item characteristics. For example, we may find that nearly everyone gets an item wrong, and perhaps that more students pick one of the incorrect responses than pick the right answer. An item that looked pretty good at the outset may turn out to tell us little or nothing about a student's skill level.
After reviewing the test questions, we still have several things we need to know. Is the test reliable--will it give a consistent score regardless of when it is administered and which form is used? A set of test items, without information about what the scores mean, has little overall meaning. How many questions a student answers correctly (or the percentage correct) is affected not only by the test taker's ability, but by the difficulty (and other characteristics) of the test. Consider, for a moment, a test covering fourth grade math skills. If the questions are very hard, the student will likely answer few correctly; if they are very easy, the student will likely answer many correctly--regardless of the student's ability! The number or percentage of questions answered correctly is simply not a good indicator of the student's skill level.
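One common (though by no means the only) index of reliability is Cronbach's alpha, an internal-consistency estimate. The sketch below, on made-up data, shows the arithmetic; a real reliability analysis would also look at alternate forms, test-retest consistency, and more.

```python
# A rough sketch of Cronbach's alpha on made-up scored responses.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Rows are students, columns are items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

k = len(responses[0])                     # number of items
totals = [sum(row) for row in responses]  # raw total scores
item_vars = [variance([row[i] for row in responses]) for i in range(k)]

alpha = (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))
print(f"Cronbach's alpha: {alpha:.2f}")  # prints 0.41 for this toy data
```

A short four-item toy test like this one scores low; operational tests with more well-behaved items typically aim much higher.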
Without proper statistical analysis, we have little understanding of what a given score on the test means.  A thorough psychometric analysis can help you understand the "true" meaning of these raw scores in relation to the underlying content area (construct) you are measuring. 
There is much more to constructing a good test. This is just a small look at why there is a big difference between a pile of items and an effective assessment. Working with SEG, or another qualified testing organization, can help you make sure that your tests are of high quality.

Let us know if we can help you with your assessment development by emailing us.

Recent Study Published in Learning & Technology Library 
Rainforest Journey Life Science Program Evaluation Peer Reviewed
Last month, we were honored to have our recent effectiveness study, "Increasing Science Skills and Interest Through the Use of an e-Learning Tool: A Mixed-Method Quasi-Experimental Evaluation," accepted for presentation at the Association for the Advancement of Computing in Education's EdMedia World Conference on Educational Media & Technology in Washington, DC.
We were excited to learn that the paper was peer reviewed and will be published in the Learning & Technology Library (formerly EdITLib).
Peer review and presentation/publication are hallmarks of the scientific process, and we were pleased to obtain this important recognition and credibility for our client EdTechLens. Credible, third-party evidence of effectiveness is critical to educational buyers, and this study will go a long way toward supporting their marketing and sales efforts.
Please contact Scott Elliot for more information about peer review, efficacy studies, or a copy of the report.  EdTechLens' press release about the Rainforest Journey Life Science Program efficacy study is available at Rainforest Journey Efficacy Testing Summary.

Five Tips for Recruiting Efficacy Study Participants

It should come as no surprise that school buyers are looking for hard evidence of product effectiveness.  With more and more educational products for schools to choose from and limited resources, buyers want to be sure they are spending their money well. 

One of the biggest challenges faced when conducting efficacy research is recruiting participants.   Since a good controlled study includes both users of the product under study and those not using the product, this can be particularly tough.

After more than three decades of conducting studies, we have learned a thing or two about this at SEG.  Here are five tips to make this task a bit easier and more effective.  Contact us if you want to know more about recruitment--or if you want to learn more about what is involved in conducting efficacy research.

  1. Do the work up front to invite the most targeted potential participants.  Make sure that you know ahead of time the type of schools or classes needed for the study.  Then be sure your recruitment targets folks who meet those criteria.  It sounds simple, but much recruitment time and effort is wasted reaching out to folks who should not be contacted.  It may feel easier to do a widespread recruitment, but the extra work of targeted recruitment is worth the effort.  Consider current clients that may be a good fit.  Ask for referrals from participants as they express interest.  Include prequalification questions in early contact so that folks can determine whether they qualify and may be interested.
  2. Spread the word through many channels and make it easy for participants to find you.   Now that you have a targeted plan for recruitment, be sure to use many channels.  Potential participants are not going to seek you out, they need to be recruited.  Include emails, calls, social media, and direct mail.  And, make it very easy for interested participants to contact you.  If you are hard to reach or there are too many steps, potential participants will give up. 
  3. Highlight the reasons for the study and what will be done with the results.  This involves generating a sense of urgency and interest for folks to want to be involved in something so interesting and important.  In other words, if no one cares or shares an interest in the outcomes, it is going to be very difficult to arouse interest in potential participants.
  4. Show that you value the participants' time and energy.  In other words, be sure to incent participants with reasonable compensation for their time and effort in the form of product, money, or other resources.  Be sure to optimize the value to pique interest, while not making it so lucrative that folks participate simply for the incentive.
  5. Address and minimize any potential concerns or roadblocks to participation.  We all know that most schools and teachers are not excitedly waiting for the next research project.  Participation takes work and some dedication for success.  So, it is important that the responsibilities of the participants are made clear up front.  Clarify how data will be protected.  Explain how the time and inconvenience will be minimized.  Support any approval and review processes that are needed for each school. 
When you target the right folks, share a compelling message about the project, spread the word widely, incent folks appropriately, show that you value everyone's time, and help to remove roadblocks, you will be well on your way to study recruitment success!  And as your participant list grows, do not forget to over-recruit to accommodate the attrition that is expected to occur during the school year.   
SEG has worked with many educational publishers and technology providers, from start-ups to the largest industry players, to design and conduct efficacy research programs.
With nearly 40 years of experience in research, we know what it takes to conduct sound efficacy research.  Call us today at 800-254-7670 ext. 102 for a cost-effective efficacy study by SEG Measurement.

SEG At Upcoming Conferences
Let's Meet!
We are looking forward to seeing our colleagues and meeting new friends at the upcoming conferences.  We are participating in several sessions, and we invite you to join us.
Look for us at these upcoming conferences:
  • Education Industry Symposium, SIIA Codie Awards, July 15 - 27, Denver, CO
  • EdNET Education Networking Conference, September 17 - 19, Scottsdale, Arizona
  • Advancing Improvement in Education Conference (AIE), October 4 - 6, San Antonio, TX
We would love to meet with you and discuss how we can help you build strong assessments and get the proof of effectiveness you need for success.  
If you would like to meet with a representative from SEG Measurement to discuss how we might help you with your assessment and research needs, please contact us.

About SEG Measurement
Building Better Assessments and Evaluating Product Efficacy
SEG Measurement conducts technically sound product efficacy research for educational publishers, technology providers, government agencies, and other educational organizations, and helps organizations build better assessments. We have been meeting the research and assessment needs of organizations since 1979. SEG Measurement is located in New Hope, Pennsylvania.