
Saturday, June 30, 2018

Without Pre-K, Louisiana Children Start Off at a Disadvantage | Editorial

From The New Orleans Times-Picayune / NOLA.com

By The Times-Picayune Editorial Board
June 29, 2018

Valerie Martinez works on a writing exercise while other students engage in their own activities during Pre-K class at Audubon Charter School on Friday, April 29, 2016. (Photo by Michael DeMocker, NOLA.com)

Life is rough for tens of thousands of children in Louisiana. They can't count on having enough to eat, and their parents' jobs are shaky. Many of them deal with violence in their neighborhoods, and their schools are subpar.

In measurement after measurement, we don't do right by our youngest residents.


That harsh reality is reflected in the latest Kids Count report from the Annie E. Casey Foundation, which ranked Louisiana 49th overall this year. That is one spot worse than in 2017.

We should be moving in the other direction.


Kids Count looks at dozens of statistical measures to determine its rankings. Louisiana is 50th in economic well-being, with 314,000 children living in poverty and 393,000 children whose parents don't have job security. Our state is 47th in education and 44th in health. We have high rates of teen deaths and of babies born at a low birth weight.

Almost half of 3- and 4-year-olds in Louisiana are not in preschool of any kind. High percentages of fourth-graders have difficulty with reading, and a large percentage of eighth-graders aren't proficient in math.

The lack of preschool is one reason students get so far behind. And we're not doing nearly enough to remedy that.

The Legislature did continue funding for preschool programs for 4-year-olds in the budget passed in the special session that ended Sunday night. And lawmakers used $28 million in new federal funding to shore up the Child Care Assistance Program, which is the only state-run early education program for children under age 4.

The child care program provides subsidies for low-income children whose parents are working, in school or in job training. The new federal money should cover 4,000 of the 5,200 children on the waiting list for services. It will be the first substantial increase in funding for the program in nine years, according to Melanie Bronfin, executive director of the Louisiana Policy Institute for Children.


Rep. Steve Carter, a Baton Rouge Republican, had a good proposal this spring for new state funding. His House Bill 513 would have directed $10 million a year for four years from the proceeds of the sale of unclaimed property to the early childhood fund. That would have helped further whittle down the waiting list.

But the state treasurer wanted to use the money in other ways, and the Senate rejected Rep. Carter's bill.

So, the state didn't invest any more of its own money into these vital pre-K programs for the coming budget year.

That needs to change next year. Lawmakers passed comprehensive legislation in 2012 aimed at improving the quality of preschool. But they haven't invested nearly enough money into that effort or into extending access to more children.

The Child Care Assistance Program was serving almost 40,000 children 10 years ago but has only had money for 15,000 in recent years. There are 140,000 low-income children age 3 and under in Louisiana who lack access to a publicly funded spot for child care or preschool, Ms. Bronfin said.

The state also has cut funding for trauma counseling for children. Former Gov. Bobby Jindal eliminated the state's Early Childhood Supports and Services program in 2012, doing away with mental health care for children under age 6 who had been exposed to violence. Then, in 2017, Gov. Edwards cut Medicaid-funded mental health services for children of all ages when the state was facing a budget deficit.

Investing in vulnerable children would improve their chances for success in school and in life. It would make it easier for their parents to work and strengthen Louisiana's economy.

It's also the right thing to do.

Abridged Autism Assessment Speeds Access to Therapy

From Spectrum News

By Jennifer Gerdts
June 5, 2018


Families traveling along a path toward an autism diagnosis experience delay upon delay.

Parents may question their first suspicions, primary care physicians generally don’t diagnose the condition, wait times at specialty clinics are extensive, and the evaluation process itself is comprehensive and lengthy.

Together, these factors create a lag time between a parent’s initial concern and an ultimate diagnosis — several years is not unusual. Not surprisingly, families report high levels of dissatisfaction and worry during this time, and long delays are a large source of stress (1).

Adding to families’ stress is the knowledge that early intervention is crucial. Evidence-based treatments such as applied behavior analysis (ABA) have the most impact when children are young. Yet accessing ABA hinges on the diagnosis itself.

Those of us who specialize in autism want to provide the best care possible, of course. For families who have been waiting so long for their child to be evaluated, we want to take the time to do a thorough evaluation, to understand the child on multiple levels.


But we also need to examine our own diagnostic process and practice with a broader awareness of the families who are waiting to come in the door.

Our team is working to shorten the diagnostic process so that providers can see more individuals in a given time. We aim to shrink the waiting list so that families do not have to wait so long for answers.


Fast Service

Parents tend to notice something different about their child’s development between 18 and 24 months of age, but may feel uncertain and hesitate to bring up concerns to their pediatrician. Primary care providers don’t receive specialized training to confidently diagnose the condition (2). So more time passes as physicians refer children to specialty clinics for diagnostic evaluations.

Wait times at hospital-based specialty centers in the United States average 14 months from first phone call to the receipt of diagnostic feedback (3). Long wait times for appointments stem from two factors: a shortage of providers with the necessary expertise and the time it takes to do the evaluation — generally, between three and six hours.

The diagnostic odyssey takes even longer for families who are underserved by the U.S. healthcare system, such as those with lower incomes and members of racial or ethnic minority groups (4).

The field is chipping away at known contributors to this ‘diagnostic bottleneck.’ Large-scale campaigns such as First Signs are increasing awareness of autism’s early signs among parents and providers. Screening for autism is increasingly common in primary care practices, leading to clearer paths for referral to specialty autism centers.

Some autism centers have tweaked their processes to improve flow through the health system, resulting in shorter wait times for autism-related clinic appointments (3,5).

However, causes for delays during the course of diagnostic evaluation have largely gone unexamined. Various guidelines for best practices in diagnostic evaluations exist, and yet how exactly clinics follow these guidelines is unknown.

Ordinarily, a child or adult in need of an autism evaluation first sees the primary care provider, who then refers to outside specialists, such as a developmental-behavioral pediatrician or a psychologist.

The evaluation process generally spans several weeks, includes at least two clinic appointments, and often involves testing for autism-related behaviors, language and cognitive skills. It also typically covers neurodevelopmental and psychiatric diagnoses besides autism.

At the Seattle Children’s Autism Center, our group has developed a team evaluation model in which multiple providers see the family in a single day. Our process involves two clinicians with expertise in autism from different disciplines. Together, they make a diagnosis using criteria from the Diagnostic and Statistical Manual of Mental Disorders.

Our process is also specific to autism. We defer questions about alternative diagnoses or comorbidities for follow-up appointments with us or other specialists.

Less is More

Last month, our group reported results from a comparison of our team model and standard approaches, in which psychologists or physicians lead the evaluation process. The standard approaches were in use at our center at the time we were rolling out the team model (6).

We reviewed medical records from 366 individuals seen in one of the three diagnostic tracks: 165 from psychology, 110 from physicians and 91 from team evaluations. Rates of autism diagnosis were similar across the tracks, ranging from 61 to 72 percent. But 90 percent of evaluations for children seen in teams were finished in a single day, compared with a series of appointments over several weeks for children in the other two tracks.

In addition, providers in our interdisciplinary teams billed nearly two hours less in total than those in the psychologist-led model (4.52 versus 6.31 hours), in which the psychologist collected relevant information about autism traits, completed testing and provided feedback over three to four separate appointments, generally a week or two apart.

Psychologists are the most common type of provider completing autism evaluations in the U.S., making this comparison particularly relevant.

Individuals seen by our interdisciplinary teams also were the most likely to engage in recommended follow-up care, even when they had to travel long distances to do so. And providers using our system were more satisfied working in teams than independently — which may help combat the burnout that often accompanies clinical work.

Our data support the idea that a team-based, focused approach to autism evaluation is feasible, effective and efficient. It should lead to shorter wait times for families, and it succeeds in engaging them in recommended follow-up care.

Still, it can be difficult for providers to do less. Autism specialists have been trained to assess the whole child and have expertise in an array of neurodevelopmental disorders. Yet in our program, clinicians must focus only on the question of autism.

To overcome this, we simply must remember the importance of meeting a need in our communities that far outstrips the capacity of providers with expertise in autism.

As providers, we must examine our own contribution to the diagnostic bottleneck. Our streamlined services model is one way to provide quality clinical service, not only for those in our care but also for those awaiting their turn.

Jennifer Gerdts is assistant professor of psychiatry and behavioral sciences at the University of Washington.

References
  1. Howlin P. and A. Moore Autism 1, 135-162 (1997) Abstract
  2. Fenikilé T.S. et al. Prim. Health Care Res. Dev. 16, 356-366 (2015) PubMed
  3. Austin J. et al. Pediatrics 137 Suppl 2, 149-157 (2016) PubMed
  4. Miller A.R. et al. Dev. Med. Child Neurol. 50, 815-821 (2008) PubMed
  5. Gordon-Lipkin E. et al. Pediatr. Clin. North Am. 63, 851-859 (2016) PubMed
  6. Gerdts J. et al. J. Dev. Behav. Pediatr. 39, 271-281 (2018) PubMed

Does Tailoring Instruction to “Learning Styles” Help Students Learn?

From the AFT's Newsletter
"Ask the Cognitive Scientist"

By Daniel T. Willingham, Ph.D.
Summer, 2018

Question: In 2005, you wrote that there was no evidence supporting theories that distinguish between visual, auditory, and kinesthetic learners. I still attend professional development sessions that feature learning-styles theories, and newer teachers tell me these theories are part of teacher education.

Is there any update on this issue?



Answer: Research has confirmed the basic summary I offered in 2005; using learning-styles theories in the classroom does not bring an advantage to students. But there is one new twist.

Researchers have long known that people claim to have learning preferences—they’ll say, “I’m a visual learner” or “I like to think in words.” There’s increasing evidence that people act on those beliefs; if given the chance, the visualizer will think in pictures rather than words. But doing so confers no cognitive advantage.

People believe they have learning styles, and they try to think in their preferred style, but doing so doesn’t help them think.

Different children learn differently. This observation seems self-evident and, just as obviously, poses a problem for teachers: How are they supposed to plan lessons that reach all of these different learners? The job might be easier if the differences were predictable or consistent.


If a teacher knew that, of the 25 students in her class, 12 learn this way and 13 learn that way, she could plan accordingly. She could teach this way and that way to separate groups of students, or she could be sure to include some of this and that in whole-class lesson plans. The question is: What is this and that?

It’s fairly obvious that some children learn more slowly or put less effort into schoolwork, and researchers have amply confirmed this intuition. (1) Strategies to differentiate instruction to account for these disparities are equally obvious: teach at the learner’s pace and take greater care to motivate the unmotivated student. (2)

But do psychologists know of any non-obvious student characteristics that teachers could use to differentiate instruction?

Learning-styles theorists think they’ve got one: they believe students vary in the mode of study or instruction from which they benefit most. For example, one theory has it that some students tend to analyze ideas into parts, whereas other students tend to think more holistically. (3) Another theory posits that some students are biased to think verbally, whereas others think visually. (4)

When we define learning styles, it’s important to be clear that style is not synonymous with ability. Ability refers to how well you can do something. Style is the way you do it.

I find an analogy to sports useful: two basketball players might be equally good at the game but have different styles of play; one takes a lot of risks, whereas the other is much more conservative in the shots she takes. To put it another way, you’d always be pleased to have more ability, but one style is not supposed to be valued over another; it’s just the way you happen to do cognitive work.

But just as a conservative basketball player wouldn’t play as well if you forced her to take a lot of chancy shots, learning-styles theories hold that thinking will not be as effective outside of your preferred style.

In other words, when we say someone is a visual learner, we don’t mean they have a great ability to remember visual detail (although that might be true). Some people are good at remembering visual detail (5), and some people are good at remembering sound, and some people are gifted in moving their bodies. (6)

That’s kind of obvious because pretty much every human ability varies across individuals, so some people will have a lot of any given ability and some will have less. There’s not much point in calling variation in visual memory a “style” when we already use the word “ability” to refer to the same thing.

The critical difference between styles and abilities lies in the idea of style as a venue for processing, a way of thinking that an individual favors. Theories that address abilities hold that abilities are not interchangeable; I can’t use a mental strength (e.g., my excellent visual memory) to make up for a mental weakness (e.g., my poor verbal memory).

The independence of abilities shows us why psychologist Howard Gardner’s theory of multiple intelligences is not a theory of learning styles. (7) Far from suggesting that abilities are exchangeable, Gardner explicitly posits that different abilities use different “codes” in the brain and therefore are incompatible. You can’t use the musical code to solve math problems, for example.

Learning-styles theories, in contrast, predict that catering to the preferred processing mode of a student will lead to improved learning. So what does the evidence say?


Does Honoring a Student’s Learning Style Help?

There are scores of learning-styles theories, some going back to the 1940s. Enough research had been conducted by the late 1970s that researchers began to write review articles summing up the field, and they concluded that little evidence supported these theories. (8)

Research continued into the 1980s, and again, when researchers compiled the experiments, they reported that the evidence supporting learning-styles theories was thin. (9)

In 2008, Professor Hal Pashler and his associates reviewed the literature and drew the same conclusion, but they also noted that many of the existing studies didn’t really test for evidence of learning styles in the ideal way. (10)

For example, if you want to test the verbalizer/visualizer distinction, it’s not enough to show that visualizers remember pictures better than verbalizers do. Maybe those people you categorize as visual learners simply have better memories overall. You need to examine both types of learners and both types of content, and show that words are better than pictures for the verbalizers, and that the opposite is true for the visualizers.

The article by Pashler and colleagues prompted a microburst of articles on learning styles, but their warning that many prior studies were poorly designed went unheeded, and much of the recent research is uninformative. (11) Nevertheless, some studies are interpretable, and three published since 2008 claim support for a learning-styles theory.

For example, one group of researchers reported that active learners benefit more from brainstorming, whereas reflective learners benefit more from instruction and recall. (12) In another study, one researcher compared three modes of web-based instruction and reported differences between input-oriented and perception-oriented learners. (13)

But both articles had the same drawback: they used such a small number of experimental subjects (9–11 per group) that there’s a real chance the results were flukes.

The third experiment claimed positive results when testing psychologist Robert Sternberg’s theory of self-government. (14)

Sternberg describes some learners as “legislative,” meaning they like to be able to create their own learning experiences without restraints, so they would learn best when allowed to skip learning materials. “Executive” learners like to follow directions, so they would learn best with clear guidance about what to do and when to do it. And “judicial” learners like to judge things and compare them, so they would learn best with lots of materials that they can compare.

The researchers had subjects learn in an online environment with instruction matched (three groups) or mismatched (six groups) to their learning style. (15) The outcome measure was a little unusual—participants were asked to reflect on the material they had learned, and two raters evaluated the quality of these reflections.

The researchers reported better reflections from students when the instructional method matched their preferred style than when it did not, but a breakdown showing exact group performance was not provided.

So three studies show results with some promise for two different learning-styles theories, which indicates the theories merit further investigation. But 13 other published papers, testing five different learning-styles theories, in both natural settings and laboratories, show no support for learning-styles theories.


Although all of them tested students beyond the K–12 years (likely because that group was easiest for the experimenters to access), each theory predicts that the same differences would be observed in higher education settings.

As with the few studies showing positive results, the studies showing negative results are often imperfect (for example, some needed more participants). (16) But some experiments were carefully designed. For example, one study provides a straightforward, powerful test of the verbalizer/visualizer distinction. (17) In the study, 204 university students took a questionnaire meant to measure their proclivity to learn in one of four ways: visually, auditorily, via reading or writing, or kinesthetically. (18)


In the next phase of the experiment, participants heard 20 statements, read one at a time. Half of the participants were to rate each statement for how well they could form a vivid mental image based on the statement. The other participants were asked to focus on the auditory aspect of the statement by judging how well they could pronounce it. Participants were not forewarned that they would be tested on information from the sentences, but the third phase posed 20 questions about them.

Everyone got more questions right after performing the imagery task (about 16 questions right) than after the auditory task (about eight questions right). That result held regardless of whether the questionnaire classified participants as more of a visual learner or more of an auditory learner.

In short, recent experiments do not change the conclusion that previous reviewers of this literature have drawn: there is not convincing evidence to support the idea that tailoring instruction according to a learning-styles theory improves student outcomes.


Now, you may protest that I’ve disparaged some studies as poorly done. I should also note that the research covers only some of the existing theories of learning styles. So maybe tailoring lessons to students’ learning styles could help; it’s just that no one has done a good experiment to show it? That’s possible, of course.

In fact, even if 100 terrific experiments failed to support the visual/auditory learner distinction, we could still say, “Well, maybe all 100 experiments were set up in the wrong way to show that learning styles do matter. Let’s try experiment number 101.” When it comes to scientific theories, you can’t prove a negative proposition beyond any doubt.

But “are we sure it’s wrong?” is a bad criterion. We should ask whether there is good evidence supporting the theory. After all, if we’re considering letting this theory influence classroom practice, we should be as sure as we can be that it’s true. It’s not enough to be able to say “we can’t be certain it’s false.”

Evidence That People Act on Their Learning Style

Research from the last 10 years confirms that matching instruction to learning style brings no benefit. But other research points to a new conclusion: people do have biases about preferred modes of thinking, even though these biases don’t help them think better.

Researchers used a clever task to show that verbalizers and visualizers do try to use their preferred mode of processing. (19) First, the experimenters created stimuli that could be verbal or visual: participants either saw an image with three features (for example, a blue triangle with stripes) or saw a verbal description of the features (“blue,” “stripes,” “triangle”).


The task they performed was a similarity judgement: a target figure appeared briefly, and then subjects saw two more figures and had to judge which one was more similar to the target. (The more similar figure always shared two of the three features.)

Both the target and the two choices could either be visual or verbal, so there were four types of trials: visual-visual, visual-verbal, verbal-visual, and verbal-verbal.

The experimenters measured brain activity while participants performed the task and found evidence that participants recode the target to match their learning style. The more someone reported being a “verbalizer,” the more likely they were to show increased activity in “verbal” parts of their brain (the left supramarginal gyrus) when they were presented with images.


The more they reported being a “visualizer,” the more likely they were to show increased activity in “visual” parts of their brain (the fusiform gyrus) when they were presented with words. It’s worth noting that the survey identifying participants as verbalizers or visualizers was administered at least two weeks before the experiment.

The experimenters wanted to ensure that people doing the task didn’t act in accordance with a style simply because they had just finished the survey, which may have made them think about being a verbalizer or visualizer.

So this result shows that people actually act on their reported preference, changing a task so they can think in words or pictures as they like. But that doesn’t mean that changing a task to fit your style makes you think better.


An obvious prediction for a learning-styles theory would be that visualizers would be better at this task when the stimuli were pictures, and verbalizers would be better when they were words. But matching the task to individuals’ preferred learning styles didn’t predict task performance.

Other experiments exploring the verbalizer/visualizer distinction show the same pattern. Depending on their self-identified learning style, people seek out written instructions or diagrams (20), or look at one or the other type of information longer. (21) Similar data have been observed in the visual, auditory, and kinesthetic framework. (22)

Another example of people acting on their learning styles concerns the difference between intuitive and reflective modes of thinking. (23) Here’s a simple problem to illustrate the difference: “A small vase holds one white ball and nine red balls. A large vase holds 10 white balls and 91 red balls. From which vase should you randomly select a ball, if you hope to get a white one?”


Intuitive thinking is fast and uses simple associations in memory to generate an answer, so it would lead you to select the large vase. That vase has more white balls, so you figure you’re more likely to get a white one. The reflective mode of thinking is slower and relies on deeper, more analytic processing of available information. It would lead you to calculate the probability of drawing a white ball from each vase and ultimately to the correct answer, the smaller vase.
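
Worked out, the reflective calculation is brief: the small vase holds 10 balls in all, so the chance of drawing a white one is 1/10, or 10 percent; the large vase holds 101 balls, so the chance is 10/101, or roughly 9.9 percent. The large vase has more white balls, but proportionally even more red ones, which is why the small vase is the slightly better bet.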

Everyone uses both modes of thinking at different times, but individuals are biased to start with one or another type of processing, especially if nothing in the environment (like instructions or a time limit) nudges them toward one or the other. (24) But most problems are not open to equally good solutions through either type of processing. Probability problems (like the vase example) are better solved through reflection, even if your bias is toward intuition.


Creativity problems that benefit from free association are better solved by intuition, not reflection. The data show that people do have some propensity to use one or another mode of thinking, but people would be better off if they didn’t; rather, they should use the mode of thinking that’s a better fit for the task at hand. (25)

This suggestion—tune your thinking to the task—assumes that people have the flexibility to process as they choose. To use an example from a different learning-styles theory, we’re assuming your status as a verbalizer can be overridden if you want to think about something visually. There’s evidence that’s true.


In a recent study, researchers asked participants to navigate virtual cities. (26) They found that verbalizers showed better memory for landmarks, but visualizers made more accurate judgments about the relative directions of city features.

In a second experiment, the researchers instructed people to act like a verbalizer or a visualizer. People were able to follow these instructions, and the results matched what happened when they let people process as they pleased: thinking verbally helped with landmarks, and thinking visually helped with direction.

Importantly for our purposes, the effect of instruction overwhelmed learning style; when told to process in a manner inconsistent with their preferred style, everyone showed the same memory effect.

We saw the same pattern in the experiment discussed earlier that used sentence memory to test the verbalizer/visualizer distinction. You can remember sentences by thinking visually or verbally, but there’s a huge advantage to the former strategy, and it works just as well no matter what your preferred style. (27)


In sum, people do appear to have biases to process information one way or another (at least for the verbalizer/visualizer and the intuitive/reflective styles), but these biases do not confer any advantage. Nevertheless, working in your preferred style may make it feel as though you’re learning more. (28)

But if people are biased to think in certain ways, maybe catering to that bias would confer a motivational advantage, even if it doesn’t help thinking? Maybe honoring learning styles would make students more likely to engage in class activities?


I don’t believe either has been tested, but there are a few reasons I doubt we’d see these hypothetical benefits. First, these biases are not that strong, and they are easily overwhelmed by task features; for example, you may be biased to reflect rather than to intuit, but if you feel hurried, you’ll abandon reflection because it’s time-consuming.

Second, and more important, there are the task effects. Even if you’re a verbalizer, if you’re trying to remember sentences, it doesn’t make sense for me to tell you to verbalize (for example, by repeating the sentences to yourself) because visualizing (for example, by creating a visual mental image) will make the task much easier. Making the task more difficult is not a good strategy for motivation.

Let’s review the conclusions we can draw from this research before we consider the implications for education.

First, since the last major literature review in 2008, more experiments have been conducted to measure whether participants learn better when new content fits their purported learning style. The bulk of the evidence shows no support for style distinctions. This conclusion is in keeping with a great many prior findings. The following four conclusions are more tentative.

Second, there is emerging evidence that people have a propensity to engage in one style of processing over others. Only a few learning-styles theories have been tested this way, but there seems to be pretty good evidence for the idea that visualizers and verbalizers are biased to process information in their preferred style, and that people may be biased toward either reflective or intuitive thinking. These biases are not very strong, however.

Third, the type of mental processing people use often has a substantial effect on task success. Reflective thinking is much better than intuitive thinking for probability problems. Imagery is much better than verbalizing for sentence memory.

Fourth, people can control the type of processing they use. Someone may prefer to think intuitively when solving a problem, but they can think reflectively if something in the environment prompts them to do so, or if they recognize it’s the type of problem best addressed that way.

Fifth, there’s no evidence that overruling your bias in this way incurs a cost to thinking. In other words, visualizers may be biased to use visual imagery, but when verbalizers use it, they are just as successful in solving problems.

One educational implication of this research is obvious: educators need not worry about their students’ learning styles. There’s no evidence that adapting instruction to learning styles provides any benefit. Nor does it seem worthwhile to identify students’ learning styles for the purpose of warning them that they may have a pointless bias to process information one way or another.


The bias is only one factor among many that determine the strategy an individual will select—the phrasing of the question, the task instructions, and the time allotted all can impact thinking strategies.

A second implication is that students should be taught fruitful thinking strategies for specific types of problems. Although there’s scant evidence that matching the manner of processing to a student’s preferred style brings any benefit, there’s ample evidence that matching the manner of processing to the task helps a lot.


Students can be taught useful strategies for committing things to memory (29), reading with comprehension (30), overcoming math anxiety (31), or avoiding distraction (32), for example. Learning styles do not influence the effectiveness of these strategies.

Daniel T. Willingham is a professor of cognitive psychology at the University of Virginia. He is the author of When Can You Trust the Experts? How to Tell Good Science from Bad in Education and Why Don’t Students Like School? His most recent book is Raising Kids Who Read: What Parents and Teachers Can Do. For his articles on education, go to www.danielwillingham.com.

Endnotes

1. Michael Schneider and Franzis Preckel, “Variables Associated with Achievement in Higher Education: A Systematic Review of Meta-Analyses,” Psychological Bulletin 143 (2017): 565–600.

2. Lee J. Cronbach and Richard E. Snow, Aptitudes and Instructional Methods: A Handbook for Research on Interactions (New York: Irvington, 1977).

3. Richard J. Riding, Cognitive Styles Analysis (Birmingham, UK: Learning and Training Technology, 1991).

4. John R. Kirby, Phillip J. Moore, and Neville J. Schofield, “Verbal and Visual Learning Styles,” Contemporary Educational Psychology 13 (1988): 169–184.

5. Steven E. Poltrock and Polly Brown, “Individual Differences in Visual Imagery and Spatial Ability,” Intelligence 8 (1984): 93–138.

6. Peter E. Keller and Mirjam Appel, “Individual Differences, Auditory Imagery, and the Coordination of Body Movements and Sounds in Musical Ensembles,” Music Perception 28 (2010): 27–46.

7. Howard Gardner, “ ‘Multiple Intelligences’ Are Not ‘Learning Styles,’ ” Answer Sheet (blog), Washington Post, October 16, 2013, www.washingtonpost.com/news/answer-sheet/wp/2013/10/16/howard-gardner-mu...

8. Judith A. Arter and Joseph R. Jenkins, “Differential Diagnosis—Prescriptive Teaching: A Critical Appraisal,” Review of Educational Research 49 (1979): 517–555; and Thomas J. Kampwirth and Marion Bates, “Modality Preference and Teaching Method: A Review of the Research,” Academic Therapy 15 (1980): 597–605.

9. Frank Coffield et al., Should We Be Using Learning Styles? What Research Has to Say to Practice (London: Learning and Skills Research Centre, 2004); Kenneth A. Kavale and Steven R. Forness, “Substance over Style: Assessing the Efficacy of Modality Testing and Teaching,” Exceptional Children 54 (1987): 228–239; and Vicki E. Snider, “Learning Styles and Learning to Read: A Critique,” Remedial and Special Education 13, no. 1 (1992): 6–18.

10. Harold Pashler et al., “Learning Styles: Concepts and Evidence,” Psychological Science in the Public Interest 9, no. 3 (2008): 105–119.

11. Joshua Cuevas, “Is Learning Styles-Based Instruction Effective? A Comprehensive Analysis of Recent Research on Learning Styles,” Theory and Research in Education 13 (2015): 308–333.

12. Sheng-Wen Hsieh et al., “Effects of Teaching and Learning Styles on Students’ Reflection Levels for Ubiquitous Learning,” Computers & Education 57 (2011): 1194–1201.

13. Yen-Chu Hung, “The Effect of Teaching Methods and Learning Style on Learning Program Design in Web-Based Education Systems,” Journal of Educational Computing Research 47 (2012): 409–427.

14. See Robert J. Sternberg, “Mental Self-Government: A Theory of Intellectual Styles and Their Development,” Human Development 31 (1988): 197–224.

15. Nian-Shing Chen et al., “Effects of Matching Teaching Strategy to Thinking Style on Learner’s Quality of Reflection in an Online Learning Environment,” Computers & Education 56 (2011): 53–64.

16. See, for example, Sarah J. Allcock and Julie A. Hulme, “Learning Styles in the Classroom: Educational Benefit or Planning Exercise?,” Psychology Teaching Review 16, no. 2 (2010): 67–79; and Michael D. Sankey, Dawn Birch, and Michael W. Gardiner, “The Impact of Multiple Representations of Content Using Multimedia on Learning Outcomes across Learning Styles and Modal Preferences,” International Journal of Education and Development Using Information and Communication Technology 7, no. 3 (2011): 18–35.

17. Joshua Cuevas and Bryan L. Dawson, “A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding,” Theory and Research in Education 16 (2018): 44–64.

18. See Neil D. Fleming, Teaching and Learning Styles: VARK Strategies (Christchurch, New Zealand: N. D. Fleming, 2001).

19. David J. M. Kraemer, Lauren M. Rosenberg, and Sharon L. Thompson-Schill, “The Neural Correlates of Visual and Verbal Cognitive Styles,” Journal of Neuroscience 29 (2009): 3792–3798.

20. Laura J. Massa and Richard E. Mayer, “Testing the ATI Hypothesis: Should Multimedia Instruction Accommodate Verbalizer-Visualizer Cognitive Style?,” Learning and Individual Differences 16 (2006): 321–335.

21. Tim M. Höffler, Marta Koć-Januchta, and Detlev Leutner, “More Evidence for Three Types of Cognitive Style: Validating the Object-Spatial Imagery and Verbal Questionnaire Using Eye Tracking when Learning with Texts and Pictures,” Applied Cognitive Psychology 31 (2017): 109–115; and Marta Koć-Januchta et al., “Visualizers versus Verbalizers: Effects of Cognitive Style on Learning with Texts and Pictures: An Eye-Tracking Study,” Computers in Human Behavior 68 (2017): 170–179.

22. Lamine Mahdjoubi and Richard Akplotsyi, “The Impact of Sensory Learning Modalities on Children’s Sensitivity to Sensory Cues in the Perception of Their School Environment,” Journal of Environmental Psychology 32 (2012): 208–215.

23. Jonathan St. B. T. Evans, “Dual-Processing Accounts of Reasoning, Judgment, and Social Cognition,” Annual Review of Psychology 59 (2008): 255–278.

24. Anthony D. G. Marks et al., “Assessing Individual Differences in Adolescents’ Preference for Rational and Experiential Cognition,” Personality and Individual Differences 44 (2008): 42–52.

25. Wendy J. Phillips et al., “Thinking Styles and Decision Making: A Meta-Analysis,” Psychological Bulletin 142 (2016): 260–290.

26. David J. M. Kraemer et al., “Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment,” Journal of Experimental Psychology: Learning, Memory, and Cognition 43 (2017): 611–621.

27. Cuevas and Dawson, “Test of Two Alternative Cognitive Processing Models.”

28. Abby R. Knoll et al., “Learning Style, Judgements of Learning, and Learning of Verbal and Visual Information,” British Journal of Psychology 108 (2017): 544–563.

29. Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel, Make It Stick: The Science of Successful Learning (Cambridge, MA: Belknap Press, 2014).

30. Danielle S. McNamara, Reading Comprehension Strategies: Theories, Interventions, and Technologies (New York: Lawrence Erlbaum Associates, 2007).

31. Sian L. Beilock and Daniel T. Willingham, “Math Anxiety: Can Teachers Help Students Reduce It?,” American Educator 38, no. 2 (Summer 2014): 28–32, 43.

32. Angela L. Duckworth, Tamar Szabó Gendler, and James J. Gross, “Situational Strategies for Self-Control,” Perspectives on Psychological Science 11 (2016): 35–55.

American Educator, Summer 2018


Friday, June 29, 2018

Inclusive Education is a Plus for Children of all Abilities

From The Seattle Times

By Ilene Schwartz
June 22, 2018

Research at the UW shows that children with and without disabilities do better in inclusive classrooms. The fear that some kids will be slowed down by classmates with disabilities is unfounded.


In a little over a week, the Special Olympics USA Games arrive in Seattle. We will be celebrating the accomplishments of more than 4,000 people with intellectual disabilities as they demonstrate their skills in gymnastics, swimming, track and field, and a variety of other events.

Although these athletes will be in the limelight July 1-6, many will return home to lives of segregation, exclusion and lack of opportunity. That can change, and we as educators, employers and citizens can be the agents of that change.


Inclusive education — providing children with disabilities the opportunities to learn alongside their typically developing peers — is the first step.

At the Haring Center at the University of Washington, we run an inclusive early childhood learning center for children of all abilities and backgrounds, from birth through kindergarten. Every day more than 225 children attend early intervention, preschool and kindergarten classes.

Some of these children have intellectual disabilities, some are gifted and some are learning to speak English. All of these children and their teachers work together to create an inclusive school.

Researchers at the UW have been studying inclusive education for more than 50 years. We have research evidence that shows children with and without disabilities do better in inclusive classrooms. The big fear is that typically developing children in inclusive settings are going to be slowed down by children with disabilities, but considerable research shows that’s not true. Individualization helps all.

In 1975, a revolutionary federal law, now called the Individuals with Disabilities Education Act (IDEA), was passed. This law guarantees students with disabilities a “free and appropriate public education” to be provided in the least restrictive environment possible.

Despite this law, we in Washington state (and across the country) are breaking our promise to students with disabilities and their families.

The share of children with disabilities who complete high school is abysmal: Only 58 percent of students in Washington state who have an identified disability complete high school. The employment numbers for these young adults after high school are even worse: Currently only 36.8 percent of adults with disabilities in Washington state are employed.

In many schools in Washington, children with disabilities are segregated from their typically developing peers. This means that because of a diagnosis and the need for extra assistance, they are removed from classrooms with their typically developing peers and placed in classrooms where, despite the best efforts of dedicated teachers, the expectations are often low and the instruction is subpar.

In many special-education classrooms, children and teachers do not have access to the same types of curriculum as their general education peers, and instructional assistants teach the lessons. In other words, students with the most significant learning needs frequently receive the majority of their instruction from people with the least training.

We can do better.


The Washington state constitution says the paramount duty of the state is to educate all children. Let’s begin now, by providing educators with the training and coaching they need to teach all of the children in their classrooms. This is not easy and does not happen in a weekend workshop.

It is a commitment to rethinking what success in school means. It is recognizing that special education is a service, not a place. There is nothing special about being required to leave your classroom to receive the instruction to which you are entitled.

Let’s start by remembering that every student with a disability is a general-education student first. In an attempt to harness the Olympic spirit and dream big, what if every school district in Washington started the 2018-19 school year by placing all kindergarten students, regardless of ability or background, in general-education classrooms?

Special-education services would follow those students and help them and their teachers meet their needs, and make meaningful progress toward important educational outcomes.

By identifying children by their strengths, rather than their labels, we can create schools that make everyone feel like champions.


Ilene Schwartz is a professor and chair of special education at the University of Washington. She is also the director of the Haring Center for Research and Training in Inclusive Education at UW.

Doubts, Confusion Surround Cognoa’s App for Autism Diagnosis

From Spectrum News

By Hannah Furfaro
June 27, 2018

The status of a phone application designed to diagnose autism has created confusion among scientists — and sowed skepticism about the app’s efficacy.


According to representatives of California-based Cognoa, the app’s maker, the tool is intended to radically reconfigure the speed and ease with which autism is diagnosed. The company announced in February that the U.S. Food and Drug Administration (FDA) has established that Cognoa’s software is a Class II diagnostic medical device for autism.

It turns out, however, that the agency has not cleared the app, also called Cognoa, for diagnosing autism — nor has it recognized the app as a Class II medical device. “This product is not FDA approved or cleared,” FDA spokesperson Stephanie Caccomo told Spectrum.

The FDA does allow companies such as Cognoa to market an app as a ‘medical device’ to diagnose conditions such as autism, as long as the companies make only limited claims about their app’s accuracy.

For Cognoa’s app, classification as a Class II device would be the first step in FDA approval. (A Class II designation indicates an intermediate level of risk to consumers; for comparison, a pacemaker is a Class III device and a sanitary napkin is Class I.)

Cognoa’s press release was widely covered in the media, however, and many scientists took its phrasing to mean that the FDA had approved the app.

“It kind of surprised me because I hadn’t heard of it before and then it’s popping out with FDA approval,” says Kevin Pelphrey, director of the Autism and Neurodevelopmental Disorders Institute at George Washington University in Washington, D.C. “If I had gotten a question from the audience, ‘Is anything FDA approved?’ I’d go, ‘Yeah, this one thing,’” he says.

Cognoa officials acknowledge that the app is not approved by the agency. Asked about the FDA’s statement, they offered a clarification of the phrasing in their press release.

“Cognoa has been determined to be a medical device intended to diagnose autism. The FDA did not say Cognoa is a Class II device, but we believe that is likely to be the ultimate classification,” says Brent Vaughan, chief executive officer of Cognoa. “We hope to obtain full FDA clearance by the end of 2018.”

In the meantime, many scientists are skeptical about the app’s utility.

The application is “physically beautiful,” says Catherine Lord, director of the Center for Autism and the Developing Brain at New York-Presbyterian Hospital. But, she says, “they haven’t published the data in scientific journals.” Lord developed two gold-standard tests for autism diagnosis: the Autism Diagnostic Observation Schedule (ADOS) and the Autism Diagnostic Interview-Revised (ADI-R).

When she learned of the confusion over the FDA classification, Lord had only this to say: “Oh dear.”


Machine Learning

Cognoa was founded in late 2013 by Dennis Wall, associate professor of pediatrics and psychiatry at Stanford University in California.

Wall explores machine learning’s potential in autism. The company’s stated goal is to develop an app based on machine learning that can help clinicians rapidly diagnose autism.

“The state of the art is dysfunctional because there are too few clinical practitioners to meet the demand,” Wall says. “The practice of detection, diagnosis and intervention needs to be reinvented — and it can be, through mobile technologies.”

The app’s technology builds on Wall’s research. In a 2012 study, for instance, he suggested that a set of 7 questions can diagnose the condition as accurately as the 93-question ADI-R (1).

An independent group of researchers was unable to replicate these results in a larger dataset more representative of the autism spectrum (2).

Then, in a 2016 study, Wall’s team tested the screen in 222 children who visited a developmental behavioral clinic at Boston Children’s Hospital. The tool correctly flagged nearly 90 percent of children with autism; it correctly cleared about 80 percent of those without the condition (3).

The results helped launch Cognoa, but the company has since developed its own algorithm. (Wall says he is an adviser to the company and does “not have control over their daily operations and direction.”) The app delivers a parent questionnaire with 22 items for children under 4 years and 25 for those aged 4 and older.

It includes questions such as: “Has your child had any developmental challenges so far?” and “Does your child imitate your actions?” Parents can upload videos of their child, which are then scored by at least three Cognoa analysts.

In 2015, the company funded a clinical trial of the app in 230 children, including 164 with autism. The trial measured how well the app stacks up against other autism screens, such as the Modified Checklist for Autism in Toddlers.

The data from the trial was published 7 May in Autism Research. The app performed better than other screening tools at distinguishing children who have autism from those who do not, according to lead investigator Stephen Kanne, executive director of the Thompson Center for Autism & Neurodevelopmental Disorders at the University of Missouri.

However, it did so only when the researchers included videos in their analysis. The parent survey alone does not perform better than other screens, Kanne says.

On its website, the company says the app has been clinically validated and is “trusted by 250,000 families.” This number includes everyone who has created an account and used the app, according to Courtney Calderon, a Cognoa spokesperson.

“Show Us the Data”

So far, the app can only notify parents whether their child is at risk for the condition; it does not provide a diagnosis, although Vaughan and Wall say this is the company’s ultimate aim for the app.

The tool is not meant to replace a clinician’s assessment, says Sharief Taraman, vice president of medical for Cognoa. In fact, he says, “we have a lot of interest from clinicians,” who want to incorporate the app into their practice.

The company plans to release a version of the app for pediatricians, Wall says. This version would be populated with data from parents who complete the app’s questionnaire. Having this information would enable pediatricians to make a diagnosis “far faster than they are making their decisions today,” Wall says.

Other researchers are skeptical, however, and say they want to see more evidence of the tool’s effectiveness.

“Show us the data,” says Tony Charman, chair of clinical child psychology at King’s College London.

“None of [what] we have seen in the published literature has been reliable and valid and sensitive and specific,” says Matthew Goodwin, assistant professor of health sciences at Northeastern University in Boston. “Yet [Wall is] going to be taking money out of families’ pockets and out of insurance companies without a research base.” Goodwin was one of the researchers who was unable to replicate Wall’s 2012 study.

Even scientists closely affiliated with the company are circumspect.

“There’s been a paper or two with relatively small sample sizes that clearly don’t represent the diversity and complexity of the conditions that they are hoping that this kind of technology will assist with,” says John Constantino, a member of Cognoa’s scientific advisory board and professor of psychiatry and pediatrics at Washington University in St. Louis. “But I think there is promise in what they’re trying to do.”

Kanne says although the trial is supportive of Cognoa’s use as a screening tool, the app is not ready for diagnostic use.

“I’m so opposed to that,” Kanne says. “By calling it a ‘diagnostic measure,’ it might open the door to misuse in the field.”

Wall says Cognoa has since improved the tool’s diagnostic abilities beyond those of the version Kanne used in his study.

The company’s leaders plan to submit the results to the FDA for the app’s approval. They also have “several manuscripts” that they intend to submit to peer-reviewed journals, according to Calderon, and hope to pitch the app to insurance companies, physicians and parents.

References
  1. Wall D.P. et al. PLOS ONE 7, e43855 (2012) PubMed
  2. Bone D. et al. J. Autism Dev. Disord. 45, 1121-1136 (2015) PubMed
  3. Duda M. et al. J. Autism Dev. Disord. 46, 1953-1961 (2016) PubMed

Thursday, June 28, 2018

Statement on Separation of Children from Their Parents

From the Frank Porter Graham Child Development Institute
The University of North Carolina - Chapel Hill

June 21, 2018


The Frank Porter Graham Child Development Institute’s mission is “advancing knowledge, enhancing lives.”

In the spirit of that mission, FPG joins national organizations, including the Society for Research in Child Development (SRCD), the American Psychological Association, the American Academy of Pediatrics (AAP), the American Public Health Association (APHA), the American Educational Research Association (AERA), the American Psychiatric Association, and the National Association of Social Workers (NASW), in rejecting policies that separate families and create harmful conditions for child development.

FPG research scientist Ximena Franco, Ph.D., is a co-author of the recently released research brief from the SRCD Latino Caucus: The Science is Clear: Separating Families has Long-term Damaging Psychological and Health Consequences for Children, Families, and Communities.

This document summarizes the research evidence on the harm caused by family separations, and has direct implications for understanding the far-reaching impact of the current zero-tolerance immigration policies and practices targeting families at the southern U.S. border.

Dr. Franco states, “While U.S. immigration has halted the practice of separating families at the border, it remains unclear if or when reunification will occur for families separated prior to implementation of the new policy.”

Exposure to severe or ongoing trauma, such as being separated from one’s parents and placed in an unfamiliar environment, may have long-lasting effects on the lives of children, families, and communities.


Such trauma experiences can create “toxic stress” for children, which interferes with the development of brain circuits that are critical for emotional attachment and decision making, and creates challenges in children’s abilities to learn, to establish positive relationships with adults and peers, and to regulate their emotions and behavior.

Harming children’s development through policies that create such conditions is unacceptable.

Speaking out against policies and practices that compromise children and families’ well-being is our ethical obligation as research scientists, technical assistance and professional development providers, and implementation scientists.

Children and families immigrating to or living in the United States are entitled to humane treatment, regardless of their nationality and legal status.

FPG is committed to promoting social justice and racial equity for children and families, as demonstrated by our work and the long-term support of our Race, Culture, and Ethnicity (RACE) Committee. We must continue to use our voices and expertise to promote equitable and just programs, policies, and practices.