SEGWay March 2014

 
SEG Measurement Newsletter SXSWedu Special Issue (March 2014)

31 Pheasant Run
New Hope, Pennsylvania 18938
(267) 759-0617
 

All of us at SEG Measurement are excited to be a part of the SXSWedu conference in Austin, Texas.  Although we have the opportunity to attend many conferences during the year, SXSWedu holds a special place for us.

While SXSWedu offers the usual array of presentations, workshops and demonstrations, it also captures the spirit of education and the possibilities it offers for a better future. We join SXSWedu in celebrating educational innovation and its potential to improve student learning.

Effectiveness research goes hand in hand with educational innovation.  Innovation is important to the future of education, but if it is not accompanied by scientific effectiveness research, we do not know whether those new ideas actually "work." We need new ideas in education.  At the same time, students have the right to an evidence-based education--an education that has been proven effective.

Through SEGway, we deliver important information about efficacy research and assessment, drawn directly from conversations we've had with thousands of educational leaders and educators over the past several months.  Without these interactions with our extended education family, we could not provide this valuable information, and our team could not develop the high-quality solutions we offer. Your feedback and shared experiences drive our work forward.  

In the last issue we told you about SEG's scholarship for attendance at the SXSWedu Conference.  We are sponsoring Melissa Contreras, a Social Studies teacher with Austin ISD, so she can experience the energy and excitement of the conference and share this valuable information with colleagues who cannot attend. Melissa has opened a Twitter account, @MrsMelContreras, to keep us apprised of the sessions she attends and any key takeaways about new ideas to implement in the classroom.  Please follow her if you are on Twitter. 
 
In this newsletter, we also include a brief report on an interesting digital learning tool from Cengage Learning that is meeting with success.   
 
Our Measurement Moment shares an interesting perspective on the United States' continued use of the traditional "English" system of measurement, and our Technical Corner offers useful information about the interpretation of Effect Size. 

We encourage you to learn more about our work in Assessment and Efficacy Research by connecting with us at SXSWedu, or at the upcoming AERA and NCME conferences in Philadelphia in April.  Also, take a look at our homepage at www.segmeasurement.com, as it is continually updated with developments within the education marketplace.  

As always, feel free to email me at selliot@segmeasurement.com.  I'd love to hear what's on your mind.

 

Sincerely,

 


 

 

Scott Elliot

 

SEG Measurement

 

SEG Measurement at SXSWedu 
Austin ISD teacher, Melissa Contreras, in attendance at SXSWedu

 

SEG Measurement has been a part of the education marketplace for quite some time.  With SXSWedu becoming a major contributor to the education conversation, we wanted to contribute as well by sponsoring conference access for a teacher.  Mrs. Contreras, a Social Studies teacher at Akins High School in Austin, Texas, and a 16-year veteran of the profession, was selected.  You can follow her thoughts about the conference on Twitter @MrsMelContreras.

 

From Melissa Contreras...

"I'm looking forward to the exposure to technology, and its potential impact in the classroom.  Things are changing rapidly within the system of education and I want to be an educator that embraces these changes and understands their potential impact on student learning.  Innovation shouldn't be a scary proposition, nor should we innovate for the sake of innovation.  At SXSWedu, I plan on absorbing as much information as possible in order to elevate my effectiveness as an educator and to relate to student experiences and perception at a much higher level.  Being an educator is a role I take very seriously, and I'm grateful to SEG Measurement for allowing me to have this experience at SXSWedu."

 

Featured Project
College Students Learn More Using Cengage Write Experience
 
It is no surprise that digital learning tools are becoming more common in post-secondary classrooms. As colleges seek to enhance their educational offerings, they are looking for technology-based solutions.
 
While there are many claims about the effectiveness of digital tools, Cengage Learning decided to put their Write Experience tool to the test.  SEG Measurement and MarketingWorks were asked to conduct a study to evaluate the effectiveness of Write Experience in 2012. This study will be reported at the American Educational Research Association in April 2014 (see conference listing below).
 
Cengage Learning's Write Experience is a digital tool that is designed to help students improve their writing skills.  Write Experience allows instructors to assess students' written communication skills using artificial intelligence and provide students with detailed revision goals and feedback on their writing to help them improve. 
 
Approximately 970 students and their instructors participated in a semester-long quasi-experimental study of Write Experience.  One group of students and instructors (the treatment group) used the Write Experience application during the semester, and a second group of students (the control group) did not.  Both groups received a pre-test and a post-test.
 
Students using Write Experience showed significantly greater growth in writing skills on the posttest than students not using Write Experience.  Any initial differences in ability were controlled for statistically.  The Effect Size was .32, suggesting that Write Experience was a substantial factor in improving students' writing.
 
For a copy of the Write Experience report, please write us at: info@segmeasurement.com.
 

Measurement Moment
United States Only Industrialized Country to Use "English System" of Measurement

Ninety-five percent of the world's countries use the metric system for measurement.  Yet the USA continues to use the "English" system of measurement.  We still rely on inches, feet, yards, and miles for distance; cups, pints, and quarts for liquids; and ounces and pounds for weight.  

 

The metric system, officially introduced in 1799, has become the standard for nearly every country in the world.  It is arguably much easier to use since it is a base-10 system--all units are related by powers of 10, which are easy to work with.  It has also long been the standard in the scientific community.

 

So, why does the US continue to use the English system?  Part of this can be explained by a natural resistance to change.  But there is likely more to it. Most of us in the US have developed an internalized understanding of the English units that we don't have for the metric system.  When someone says that something is about 2 feet long, or that a recipe requires two cups, we quickly understand the quantities involved.  Over time we have developed clear referents for those measures. Because most of us do not use the metric system on a daily basis, we struggle more with the concept of a meter; often, we try to convert it to more familiar measures (feet), which takes added time and likely introduces some error.  Our intuitive understanding of English measures, and the "extra work" associated with converting to metric, make it difficult for us to make the move.

 

This same issue causes significant problems when we move to mental measurement. Students, teachers, parents, and others in education often gravitate toward measuring students on the percent-correct scale. They believe that milestones on that scale have specific meanings--for example, "75% is a C" or "80% is passing."  The trouble is, these beliefs may be incorrect.  80% on an easy test may not reflect as much ability as 60% on a hard test... but that is a story for an upcoming issue of SEGway.

Technical Corner
Size Matters: Interpreting Effect Sizes
 
Those of us trained in educational research "back in the day" were taught that the most important part of reporting results was comparing the results achieved to a predetermined level of statistical significance, typically .01 or .05. But as with most things, times have changed.  
 
Most major psychological journals now require, or strongly urge, reporting Effect Size first and foremost. They encourage following the report of Effect Size with the actual p-value (the probability of observing results at least as extreme if there were no true effect) from the statistical tests conducted.
 
Effect Size refers to the size, or magnitude, of the difference observed in a statistical test.  Simply put, it is the statistic that answers the question: how strong was the relationship between the variables, or how large was the difference between the study groups? 
 
Much of the educational research evaluating product or program effectiveness compares the group using the product (or participating in the program) to a control group that is not.  The difference between the means of the two groups on an outcome measure (test, survey, etc.) is reported, typically on a common metric known as the Effect Size (Cohen's d or a similar statistic).  While reporting the effect size on a consistent scale offers many advantages, we have to ask: how strong is this effect, and how big an effect is big enough?
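To make the metric concrete, here is a minimal sketch of how Cohen's d is typically computed: the difference between the group means divided by the pooled standard deviation. The function name and the score data are our own illustration, not figures from any SEG study.

```python
import math

def cohens_d(treatment, control):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-test scores for two small groups
treatment = [78, 82, 75, 88, 80, 85]
control = [74, 79, 72, 81, 76, 78]
print(round(cohens_d(treatment, control), 2))
```

Because d is expressed in standard-deviation units rather than raw score points, it lets us compare effects across studies that used different tests and scales--which is exactly why journals favor it.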
 
Jacob Cohen (1988), arguably the most important statistician to explore the concept of effect size early on, tells us that .20 is small, .50 is medium, and .80 is large.  While these guidelines are often quoted to assist in interpretation, they are arguably out of step with expectations for educational research. Even Cohen (1988) emphasized the importance of interpretation within context. Several issues in education provide that context.
 
First, our treatment-control comparisons are often overstated.  Unlike a medical study, where the control group is completely denied access to the treatment, educational control groups are typically exposed to some form of instruction, perhaps even a product competitive with the one being tested. In short, they are not a pure, isolated control in the strictly scientific sense.  For this reason, we would expect the magnitude of the observed effects to be smaller.
 
Second, education is a complex enterprise.  There are so many influences on students including background, home environment, peers, and parents to name a few.  We need to adjust our expectations to recognize that any single product or program is likely to have a more modest effect on educational outcomes.
 
A third consideration providing context for the interpretation of effect sizes is historical results. While I have not computed a true mean across the hundreds of studies we have conducted, I would venture to say that the average observed effect size is about .20.  Slavin and others in summary reports of various reading and math programs report average effect sizes not far off from this number.
 
For these reasons, I would argue that we should consider .20 to be a good solid effect.  Findings of .10 should be considered meaningful and those at .40 and above should be considered very strong. But, of course, it depends on context.
 
Admittedly, this article does not do justice to the topic of effect sizes; literally thousands of pages have been written on it.  We hope that this discussion has provoked some thought on the subject and will prompt further study.

To get an idea of typical effect sizes and the contexts in which they are observed, check out SEG's research reports on our website.

SEG Measurement can help you design your efficacy research study to maximize success.  Call us today to find out how we can help you with your assessment and research needs.


About SEG Measurement

SEG Measurement conducts technically sound product efficacy research for educational publishers, technology providers, government agencies, and other educational organizations, and helps organizations build better assessments. We have been meeting the research and assessment needs of organizations since 1979. SEG Measurement is located in New Hope, Pennsylvania and can be accessed on the web at www.segmeasurement.com.

 

 SEG At Upcoming Conferences
Let's Meet!

We were pleased to see many of our colleagues at the Texas Computer Education Association Convention (TCEA) and the Pennsylvania Educational Technology Exposition (PETE) last month.  Interest in educational products and services at the district and state level is increasing, and we are seeing a wave of new innovations. We look forward to seeing you at the upcoming conferences we will be attending.

  • SXSWedu, March 3-6, Austin, TX
  • National Council on Measurement in Education Annual Meeting (NCME), April 2-6, Philadelphia, PA
  • American Educational Research Association Annual Meeting (AERA), April 3-7, Philadelphia, PA

If you would like to meet with a representative from SEG Measurement to discuss how we might help you with your assessment and research needs, please contact us at info@segmeasurement.com.

Join Our Mailing List!