Society for the Teaching of Psychology
Division 2 of the American Psychological Association

GSTA Blog

Welcome to the GSTA blog! 

In an effort to keep the Graduate Student Teaching Association (GSTA) blog current, we regularly welcome submissions from graduate students as well as full-time faculty. We have recently decided to expand and diversify the blog's content, which now ranges from new research in the area of the Scholarship of Teaching and Learning (SoTL) and public interest topics related to teaching and psychology to occasional book reviews, while continuing our traditional aim of posting teaching tips. Blog posts are typically short, about 500-1000 words, not including references. As this is an online medium, in-text hyperlinks, graphics, and even links to videos are strongly encouraged!

If you are interested in submitting a post, please email us at gsta.cuny@gmail.com. We are especially seeking submissions in one of the following four topic areas:

  • Highlights of your current SoTL research
  • Issues related to teaching and psychology in the public interest
  • Reviews of recent books related to teaching and psychology
  • Teaching tips and best practices for today's classroom

We would especially like activities that align with APA 2.0 Guidelines!

This blog is intended to be a forum for graduate students and educators to share ideas and express their opinions about tried-and-true as well as newer teaching practices and other currently relevant topics regarding graduate student teaching.

If you have any questions you would like addressed, you can send them to gsta.cuny@gmail.com and we will post them as a comment on your behalf.

Thanks for checking us out,

The GSTA Blog Editorial Team:

Teresa Ober and Charles Raffaele


Follow us on Twitter @gradsteachpsych or join our Facebook Group.


  • 05 Oct 2017 10:00 AM | Anonymous member (Administrator)

    By Karyna Pryiomka, Doctoral Candidate, The Graduate Center, CUNY


    In four years of teaching statistical methods in psychology, I have noticed that students often experience difficulty recognizing the relationship between a hypothetical construct, its operational definition, and the interpretation of results. This often leads to over-generalization, incorrect inferences, and other interpretive mistakes. Operational definitions of hypothetical constructs represent an important component of research in psychology. Operationally defining constructs and understanding the implications of these definitions for data interpretation then constitute key competencies for a psychology student. While operationalism is widely taught in research methods courses, its discussion in statistics courses is often reduced to a few paragraphs in an introductory chapter. To help my students better understand the collaborative, iterative, and context-bound process of creating appropriate operational definitions, I employ a low-stakes group activity during which students work in groups of 3 or 4 to create operational definitions of hypothetical constructs, such as confidence, in two distinct contexts: individual-level decision making and research design. The learning objective of the activity is to demonstrate the role of context in deciding how to operationalize a given construct and to illustrate the process of developing consensus about the meaning of constructs and their operational definitions.

    Here are the steps that I take to implement this activity:

    1.  I begin by assigning students to groups of 3 or 4, depending on class size. Ideally, at least 2 groups should work on the same problem. Each group receives only one variation of the problem. Below are examples of the two prompts. I give students about 15 minutes to work on the task in their groups.

    Individual-Level Decision-Making Context: A growing cat food company, Happy Kibble, is expanding its sales department and asks you, a group of industrial-organizational and personality psychologists, to use your expertise and help them hire the best salespeople so that they can convince cat owners around the country to switch to Happy Kibble. You know from research that people who are confident often make good salespeople. How would you define and operationalize confidence in this context in order to select a good employee? What questions would you ask candidates? What behavior would you pay attention to during an interview? Assume that the human resources office has pre-selected the candidates so that they all qualify for the job based on the minimum education and professional experience requirements.

    Research Context: A growing cat food company, Happy Kibble, has partnered with your research team to investigate whether there is any relationship between the confidence level of a salesperson and their professional success. Happy Kibble wants to conduct a real scientific study to answer this question. The company needs your expertise in defining and measuring confidence; however, you are on a tight budget, so conducting individual interviews might not be an option if you want to collect a large enough sample to draw meaningful conclusions. How would you define and operationalize confidence in this context in order to be able to measure this trait in as many people as you can?


    2.  Once groups have created their definitions, a representative from each group is invited to write their definitions and measurement/assessment plan on the board.

    3.  I like to begin the discussion by emphasizing the differences between the two contexts. We then focus on establishing consensus among groups that worked on the same problem. We discuss the similarities and differences between the operational definitions produced by these groups, discuss the strengths and limitations of the proposed measurement/assessment plan, and reconcile any differences. We then compare the consensus definition produced for the interview context with the consensus definition produced for the research context. We outline key differences in contexts, discuss what type of evidence can be collected in each, and how the context influences the interpretation of data.
    For example, students in both contexts often mention eye contact as one of the behaviors representing confidence. We then discuss how they would measure or observe eye contact in the context of a job interview compared to a research study. Students in the job interview context point out that these would be direct qualitative judgments, made as they engage with the interviewees. Students in the research context often say that they would use video equipment to observe how sales representatives establish eye contact with their customers. In this context, then, unlike their colleagues conducting job interviews, students are less likely to make direct qualitative judgments about individual people, but would rather observe, record, and quantify behavior remotely.

    In my experience, students eagerly engage in the discussion, justifying their decisions and challenging those of others. They also begin to ask questions and think critically about the inferences that could be made based on the operational definitions they have proposed. For instance, a group once suggested that a particular speech pattern or the use of specific words could constitute a variable to assess confidence. This suggestion led to a discussion of the relationship between language and existing standardized assessments such as IQ tests, as well as of potential bias against non-native English speakers, making students question whether the proposed operational definition would fairly and accurately reflect someone's confidence rather than another, potentially related trait.

    Overall, I have found this activity to be a great way to engage students in the discussion of important principles of research design, while promoting critical thinking about the role of operational definitions and measurement procedures in data collection and subsequent interpretation.


  • 29 Sep 2017 5:00 PM | Anonymous member (Administrator)

    By Teresa Ober, The Graduate Center, CUNY

    Dr. Jon E. Grahe is Professor of Psychology at Pacific Lutheran University. His research interests include interpersonal perception, undergraduate research, and crowd-sourcing science. The GSTA editing team recently had a chance to talk with Dr. Grahe about his views on how innovations in undergraduate education can be used to address some of the current problems facing psychological science. In speaking with him, we learned quite a lot about Open Science, the Replication Crisis, and the Collaborative Replication and Education Project. Here are some of our burning questions about these topics and an edited summary of Dr. Grahe's responses.

    Teresa Ober: Let’s start with a simple question. What exactly is “Open Science”?

    Jon Grahe: There are two levels here when we talk about "Open Science." At one level, we might be referring to open-access resources, which is not my primary focus; that level is about making sure everyone can read all publications. At another level, we are talking about transparency in the research process. Transparency in the research process can be manifested in at least three ways: (1) sharing hypotheses and analysis plans, (2) sharing information about data collection procedures, and (3) sharing data and the results of the research process.

    There are certain tools available today that allow researchers to practice open science at this second level. Many of these tools are being developed by the people at the Center for Open Science, which was formed by Brian Nosek and Jeffrey Spies to encourage more scientific transparency. One of their products is the Open Science Framework, an electronic file cabinet with interactive features that makes it easier for researchers to be transparent during the research process and serves as a hub where researchers can document, store, and share content related to their research projects.

    TO: Why is Open Science so important?

    JG: When we learn about science as children, we are taught that replication and reproducibility are a big part of the scientific process. To make replication possible, accurate documentation and transparency are necessary parts of the methods. Transparency is mainly what open science is about, and it is important because it allows us to test and retest our hypotheses. It is simply fundamental to the scientific process of iterative hypothesis testing and theory development.

    TO: There has been discussion of the transparency of "Open Science" as a kind of revolution in the philosophy of science. What are your thoughts about this? Do you view it as a radical transformation, or as a natural continuation, given technological advancements or changed world views that make people more disposed towards sharing information in ways not previously possible?

    JG: The recent interest in openness in the scientific process has likely emerged from calls for improving the quality of science, which hit a critical juncture with the replication crisis. Transparency in science also became more feasible with advances in technology that allowed researchers to document and share research materials with relative ease. Before digital storage was cheap, it was very difficult to share such information. Social networking platforms also encourage more distant connections and allow for better working relationships between people who never meet face to face. The digital age allows us to experience this revolution.

    TO: Tell us a little more about the “Replication Crisis.”

    JG: When we talk about the replication crisis, it is important to recognize that it affects psychological science, but not exclusively. Though the field of psychology emerged as the center of attention for this issue, other scientific disciplines are likewise affected; in some ways, the crisis of replication simply reached psychology sooner.

    The Replication Crisis in psychology seemed to emerge around 2011 as a result of three events. The first was a set of serious accusations against a researcher who had reportedly fabricated data in multiple studies. The second was the publication of findings that seemed outrageous and that involved misuse of statistical procedures. The third was a general swelling of the volume of research that had been shown to fail to replicate. In general, when many doctoral students and other researchers attempted to replicate published, and supposedly established, research findings, they were unable to do so. Since then, similar scrutiny has developed in other fields as well. These issues have led some researchers to speculate that as many as half of all published findings are false.

    TO: How are "Open Science" initiatives such as the Open Science Framework attempting to address this issue?

    JG: By promoting transparency in the scientific process, replication becomes more feasible. In my own experience, I approached the replication crisis as a research methods instructor who saw a wasted resource in the practical training that nearly all undergraduate students must undertake. Before the crisis, my colleagues and I had been arguing for large-scale collaborative undergraduate research that was practical and involved efforts on the part of students to replicate previously published research findings (see Grahe et al., 2012; School Spirit Study Group, 2004).

    TO: We've talked about how "Open Science" is good for research, but I am wondering if you could elaborate on how such initiatives can be good for preparing undergraduate and graduate students as researchers?

    JG: Over 120,000 students graduate each year with a psychology degree, of whom approximately 90-95% must take a research methods class to fulfill their degree requirements. Of those, it is estimated that approximately 70-80% also complete a capstone or honors project, and about 50% collect actual data to complete the project. Thus, there are tens of thousands of such projects that involve data collection each year in the U.S. alone. As a research methods instructor, I am concerned about making sure that my students have practical training that helps them professionally and allows them to learn about the research process more meaningfully. Further, by having class projects contribute to science, my work as an instructor is more clearly valued in tenure and promotion. In my classes, participating in "authentic" research projects is always a choice, and in my experience, many students embrace the chance to conduct an actual study and collect data and are also excited to receive training in conducting open science.

    TO: This sounds very interesting. Could you tell us more about the Collaborative Replication and Education Project (CREP)?

    JG: CREP is actually the fourth project that I have undertaken to engage undergraduates in "authentic" research experiences within a pedagogical context. The CREP is specifically geared towards replication, whereas the earlier projects were oriented toward getting students to contribute to science while learning to conduct research.

    As far as I know, the first-ever crowd-sourced study in psychology was published in a 2004 issue of Teaching of Psychology (School Spirit Study Group, 2004; http://www.tandfonline.com/doi/abs/10.1207/s15328023top3101_5). The project leader found collaborators by inviting them to measure school spirit at both an institutional level and an individual level. Students could use the individual data for their class papers, and the different units of analysis made for interesting classroom examples.

    The same year this was published, the same project leader, Alan Reifman, invited us again to collectively administer a survey, this time about Emerging Adulthood and Politics (Reifman & Grahe, 2016). Because the primary hypothesis was not supported, from about 2005 until about 2012 no one bothered to examine the data. However, when I was starting to focus on increasing participation in these projects, I saw this data set (over 300 variables from over 1,300 respondents at 10 different locations) as a valuable demonstration of the project's potential. We organized a special issue of the journal Emerging Adulthood in which nine different authors each answered a distinct research question using the data set. A follow-up study called the EAMMi2 collected similar data from over 4,000 respondents through researchers at 32 different locations. Both of these survey studies demonstrate that students can effectively help answer important research questions.

    In another undergraduate-focused survey project that occurred just before the CREP was launched, Psi Chi collaborated with Psi Beta on their National Research Project (Grahe, Guillaume, & Rudmann, 2013). For this project, contributors administered the research protocol from David Funder's International Situations Project to respondents in the United States.

    In contrast to these projects, the CREP focuses on students completing experimental projects, and students take greater responsibility for project management. While I had one earlier attempt at this type of project, it didn't garner much interest until the Replication Crisis occurred. At that point, there was greater interest from others in the argument that students could help test the reproducibility of psychological science. Of note, one of the earliest contributors was a graduate student teaching research methods. As we have developed over the past 5 years and learned how best to manage the system, I'm now curious to see if there are potential partners in other sciences. There is nothing in the name that says psychology, and the methods should generalize well to other disciplines.

    TO: Why is the logo for the CREP a bunch of grapes?

    JG: The logo for the CREP is a bunch of grapes, which helps prime people to say the acronym as a rhyme for "grape," but it is also a useful metaphor for replication studies in science. When you think of replications, you can think about a bunch of grapes. Even though each grape in a bunch consists of the same genetic material, there is some variation in the size and shape of each grape. Each grape in a bunch is like the result of a replication study. Just as grapes with the same genetics can differ in relative size, replications examining the same question will vary in sample size, yielding different-sized confidence intervals. And replications can't be exact; they can only be close. While grapes on the same vine may have slight differences in taste due to variability in ripeness, replication studies can have subtle differences in their conclusions while striving to test the same underlying phenomenon. Replication studies can only be close, never exact, because of differences in the participants or researchers conducting the study, research contexts of time and location, slight variations in materials, and so forth. These differences can produce vastly different results even if the effect is still there. Conducting a successful replication study doesn't mean you're guaranteed to find the same effect. And of course, there are varieties of grapes, just as there are varieties of replications. Close replications and conceptual replications address different questions, just as different varieties have different flavors. The CREP has a Direct+ option whereby contributors can add their own questions to the procedure, as long as they are administered after the original study's procedure or collected as additional conditions. This more advanced option provides different flavors of research for the CREP. There are many comparisons that make grapes a good metaphor for replication science, and I hope that the CREP continues to help students contribute to science while learning its methods.

    TO: If I can ask a follow-up question, then what could be considered a “successful replication”?

    JG: For researchers, a successful replication is one that, to the best of the researcher's abilities, is as close as possible to the methods of the original study. It is not about whether a finding comes out a certain way. For students, a successful replication is further demonstrated when the student shows an understanding of the hypothesis and of why the study was designed to test that hypothesis. Can they reproduce the results correctly, and can they interpret the findings appropriately? In other words, did they learn to be good scientists while generating useful data?

    TO: If you are working with students on a CREP replication study, do you allow them the freedom to choose a paper to replicate, or should instructors be a little more direct in this process?

    JG: The selection process for choosing replications is straightforward. We tend to select several highly cited articles each year, about 36 studies in total. We then code them for feasibility of undergraduate replication and select those that are most feasible. We do this not based on the materials that are available, because often the researchers are willing to provide these, but rather to identify important studies that students can complete during a class.

    In my classes, students have complete choice over which studies they want to conduct, and often there are options beyond the CREP. However, I know others who provide students a list of studies that will be replicated or who limit choice in other ways. There are many methods, and instructors should find the system they like best.

    TO: How can graduate student instructors become more involved in the CREP initiative?

    JG: The CREP website gives instructions on what to do. In my chapter in the recent GSTA handbook, I talk about the conditions for authentic research to be successful. If there is no IRB approval in place for conducting the research with undergraduates, then it simply cannot happen. The institution, department, and any supervising research staff need to be on board with it. When considering authentic research opportunities, it is always a good idea to talk to the department chair.

    For graduate students who would like to get involved with CREP, we are always looking for reviewers. The website contains some information about how to apply as a reviewer.

    Another thing that graduate student instructors can do is take the CREP procedures and implement them in their own courses. The Open Science Framework is a great tool, and even if an instructor cannot use the CREP for whatever reason, they could use the Open Science Framework to mimic the open science workflow. Even if data never leave the class, there is information on the CREP website about workflows and procedures.

    TO: What sorts of protections are there for intellectual property under the Open Science Framework?  Can you briefly explain how the Creative Commons license protects the work of researchers who practice “Open Science”?

    JG: The Open Science Framework allows you to choose licenses for your work. In terms of further protections, the Open Science Framework itself doesn’t really provide protections on intellectual property, but rather the law itself does. If a research measure is licensed and published, there is still nothing that protects it except for the law. In any case, following APA guidelines and reporting research means that you are willing and interested in sharing what you do and your findings.

    TO: We see that you just recently came back from the “Crisis Schmeisis” Open Science Tour. Tell us how that went and about the music part.

    JG: Earlier this year, I agreed to conduct a workshop in southern Colorado. Because I'm on sabbatical, I decided to drive instead of fly and scheduled a series of stops throughout several states. These travels became the basis of the "Crisis Schmeisis" tour (https://osf.io/zgrax). In total, there were 13 meetings, workshops, or talks at 7 different institutions. I had the chance to speak with provosts and deans, as well as students in research methods classes or at Psi Chi social events. During these visits, I showed how to use the Open Science Framework for courses or research, or gave talks about the CREP or EAMMi2 projects as demonstrations of ways to interface with the Open Science Framework.

    I somewhat jokingly called this the “Crisis Schmeisis” tour to help explain that even if someone doesn’t believe there is a replication crisis, the tools that emerged are beneficial and worthwhile to all. Throughout the year, I will continue the tour by extending existing trips to visit interested institutions.

    TO: The Crisis Schmeisis tour almost looks like a musical tour. Is that intentional?


    JG: It is. I am also planning to write a series of songs about scientific transparency. Because it is an "Open Science Album," I'm putting the songs on the OSF (https://osf.io/y2hjc/). There is a rough track of the first song, titled "Replication Crisis." The lyrics convey the basic issues of the crisis, and I'm hoping that other open scientists will add their own tracks so that there is a crowd-sourced recording. I'm currently practicing "Go Forth and Replicate" and have a plan for a song about preregistration. My goal is to complete 12 songs and to play them live at the Society for the Improvement of Psychological Science conference next summer (http://improvingpsych.org/).

    TO: What happened in your career as a researcher or teacher that inspired you to become an advocate for the OSF?

    JG: During my first sabbatical, I was very unhappy with my place as a researcher and scholar. Did you know that the modal number of citations for all published manuscripts is exactly zero? That means that most published work is never cited, even once. As a researcher, I thought about my frustrations around working on something that would not matter, and as a teacher, I was concerned that students were not getting good practical training.

    At one point during my first sabbatical, I became frustrated in the process of revising a manuscript after receiving feedback from an editor. Instead of being angry about a manuscript that might never get cited anyway, I thought about areas where I was passionate and might be able to make a difference. I decided there was a better way to involve undergraduates in science, and that there were likely many research methods instructors like me who were also feeling underused and undervalued. After that point, my career changed direction. At the time I was formulating these ideas, it was not about open science per se; it was really about undergraduates making contributions to science and gaining experience from them.

    TO: Beyond replicability, what is the next crisis facing psychological science, and how can we prepare?

    JG: I would like to see an interest in more expressive behaviors rather than the key-strokes that typically define the research process. So much of the research that is conducted in a psychological lab is pretty far removed from daily interactions, and I would like to see psychologists work harder to demonstrate meaningful effect sizes in authentic settings. Some of the effects we find in research are quite small, and it seems that we spend a lot of time talking about effect sizes that explain less than 3% of the variability in a given outcome variable.

    TO: Any final thoughts?

    JG: Just a note about the distinction between preregistration and preregistered reports. I think these often get confused in the Open Science discourse. Preregistration is the act of date-stamping hypotheses and research plans. Preregistered reports are a type of manuscript in which the author submits an introduction, methods, and preregistered analysis plan, and the editors make a decision to publish based on this information because the study is important regardless of the outcome of the results. There is also the possibility of writing and submitting an entire manuscript that has a preregistration as part of it. I see a lot of confusion about this topic.


    References

    Bhattacharjee, Y. (2013, April). The mind of a con man. The New York Times [Online]. Retrieved from http://www.nytimes.com/2013/04/28/magazine/diederik-stapels-audacious-academic-fraud.html

    Carey, B. (2011, January). Journal’s paper on ESP expected to prompt outrage. The New York Times [Online]. Retrieved from http://www.nytimes.com/2011/01/06/science/06esp.html

    Grahe, J. E. (2017). Authentic research projects benefit students, their instructors, and science. In R. Obeid, A. Schwartz, C. Shane-Simpson, & P. J. Brooks (Eds.), How we teach now: The GSTA guide to student-centered teaching (pp. 352-368). Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/

    Grahe, J. E., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the undiscovered resource of student research projects. Perspectives on Psychological Science, 7(6), 605-607.

    Hauhart, R. C., & Grahe, J. E. (2010). The undergraduate capstone course in the social sciences: Results from a regional survey. Teaching Sociology, 38(1), 4-17.

    Hauhart, R. C., & Grahe, J. E. (2015). Designing and teaching undergraduate capstone courses. John Wiley & Sons.

    Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

    Pashler, H., & Wagenmakers, E. J. (2012). Editors' introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528-530.

    School Spirit Study Group. (2004). Measuring school spirit: A national teaching exercise. Teaching of Psychology, 31(1), 18-21.

  • 28 Sep 2017 5:00 PM | Anonymous member (Administrator)

    By Regan A. R. Gurung, University of Wisconsin-Green Bay

    There are many ways to learn. I like to think that, armed with a curious mind and the right resources and motivation, anyone can learn by themselves. Of course, when we think of learning we don't usually think of the solo pursuits of motivated individuals. We tend to think of schools and colleges. While master teachers can inspire with their passion and masterfully deliver content, most students rely heavily on the course materials faculty assign (though students may not always read all of them) to solidify content acquisition. Consequently, the quality of course material is of paramount importance. For years, faculty required students to buy textbooks. Students mostly bought them (and sometimes read them). Now there are a variety of free resources available. How do they compare to the expensive versions? Are they all created equal?

    Once upon a time, you could rely on the simple heuristic that "pricey equals quality." After all, standard textbooks (STBs) have the backing of major publishing companies that invest large sums of money to ensure quality products. The development editors, the slew of peer reviewers examining every draft of every chapter, and the focus groups should ensure a quality product. Then there are the bells and whistles. STBs are packed with pictures and cartoons and come with a wide array of textbook technology supplements (online quizzes, etc.; Gurung, 2015). Many believe that if an STB is put out by a publisher whose name is recognizable, it must be good. If a familiar author writes an STB, it must be good. In fact, these are all empirical questions that are never really tested. The market research that big publishers cite and the student and faculty endorsements peppering the back covers and promotional materials of STBs rarely (if ever) represent true empirical comparisons of learning. To be fair, true comparisons of learning are difficult: a variety of factors (the student, the teacher, the textbook) all influence learning.

    Are all STBs equal? In one study I did some years ago, students rated a number of the most widely adopted textbooks in the introductory psychology market (Gurung & Landrum, 2012). Students did differentiate between texts, rating some books better than others, but does student preference matter? In a number of national studies, colleagues and I had students using different textbooks take a common quiz (ours) so that we had a common measure of learning (Gurung, Daniel, & Landrum, 2012; Gurung, Landrum, & Daniel, 2012). Quiz scores did not vary. Students seem to learn similarly from different textbooks regardless of the publisher. But now for the big question: Given that STBs are extremely expensive (and students complain), what about textbooks that are free?

    Enter Open Educational Resources (OERs). OERs provide students and faculty with free electronic materials. For a great review of the growth of the OER movement, see Jhangiani and Biswas-Diener (2017). The OER movement sprouted from the creation of MERLOT by California State University in 1997. MERLOT provided access to curriculum materials for higher education, and Open Access and the Budapest Open Access Initiative further fueled the rise of the OER movement. OERs strode into the public consciousness when MIT, with funding from the Mellon and Hewlett foundations, created OpenCourseWare, online courses designed to be shared for free. Are OERs better than STBs?

    The best studies, using standardized or similar exams, show no differences in exam scores between OER users and STB users. Sadly, the bulk of the available studies are fraught with limitations and validity issues. In an attempt to transcend the limitations of extant studies, I recently published two large, multi-site studies (Gurung, 2017) comparing students using OERs with students using STBs, measuring key student variables such as study techniques, time spent studying, ratings of the instructor, and ratings of the quality and helpfulness of the textbook. All students completed a standardized test using a subset of items from a released Advanced Placement exam.

    In both studies, students using an OER scored lower on the test after controlling for ACT scores. Study 2 also compared book format (hard copy or electronic) and showed that OER hard-copy users scored lowest. Which book students used predicted significant variance in learning over and above ACT scores and student variables. These results provide insight into the utility of OERs and the limitations of current attempts to assess learning in psychology. On the upside, students using an OER rated the material as more applicable to their lives.
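    For readers less familiar with this kind of analysis, here is a minimal sketch of a hierarchical regression like the one described above, entering the covariate (ACT scores) first and then textbook type. It is a hypothetical illustration in Python using simulated data, not the study's actual analysis script, and the variable names (act, oer, exam) are invented:

        # Hypothetical sketch: does textbook type (OER vs. STB) predict exam
        # scores over and above ACT scores? Simulated data, not real results.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        rng = np.random.default_rng(0)
        n = 400
        df = pd.DataFrame({
            "act": rng.normal(24, 4, n),   # ACT composite scores (simulated)
            "oer": rng.integers(0, 2, n),  # 1 = OER user, 0 = STB user
        })
        # Simulate exam scores: ACT matters a lot; the book effect is small.
        df["exam"] = 40 + 1.2 * df["act"] - 2.0 * df["oer"] + rng.normal(0, 8, n)

        step1 = smf.ols("exam ~ act", data=df).fit()        # covariate only
        step2 = smf.ols("exam ~ act + oer", data=df).fit()  # add textbook type
        print(step1.rsquared, step2.rsquared)
        print(anova_lm(step1, step2))  # F-test for the change in R-squared

    If the F-test for the second step is significant, the predictor added at that step (here, textbook type) explains variance beyond the covariate, which is the sense of "over and above" used above.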

    When we talk about quality in higher education, we tend to rely on the credibility of authors and the peer review process. While my findings urge caution in using OERs, they also shed light on how little learning-outcome data exist for the use of STBs. Faculty still adopt these books, requiring students to pay thousands of dollars a year in textbook costs.

    Well-curated OERs, those whose writing and content are monitored and reviewed by peers and contributed by credible sources, deserve to bask in the same reflected glory as STBs. While OERs are ready for their time in the spotlight, scholars of teaching and learning need to work to assess the true quality of all educational resources. OERs present the opportunity for every member of the public to learn at no cost. We all need to pay attention to what we can get for free, but we also need to ensure that materials are tested for effectiveness.


    References

    Gurung, R. A. R. (2015). Three investigations of the utility of textbook teaching supplements. Psychology of Learning and Teaching, 1, 48-59.

    Gurung, R. A. R. (2017). Predicting learning: Comparing an open educational resource and standard textbooks. Scholarship of Teaching and Learning in Psychology, 3, 233-248. http://dx.doi.org/10.1037/stl0000092

    Gurung, R. A. R., Daniel, D.B., & Landrum, R. E. (2012). A multi-site study of learning: A focus on metacognition and study behaviors. Teaching of Psychology, 39, 170-175. doi:10.1177/0098628312450428

    Gurung, R. A. R., & Landrum, R. E. (2012). Comparing student perceptions of textbooks: Does liking influence learning? International Journal of Teaching and Learning in Higher Education, 24, 144-150.

    Gurung, R. A. R., Landrum, R. E., & Daniel, D. B. (2012). Textbook use and learning: A North American perspective. Psychology of Learning and Teaching, 11, 87-98.

    Jhangiani, R. S., & Biswas-Diener, R. (Eds.). (2017). Open: The philosophy and practices that are revolutionizing education and science. Retrieved from http://dx.doi.org/10.5334/bbc

  • 28 Sep 2017 10:00 AM | Anonymous member (Administrator)

    By Jessica Murray, The Graduate Center CUNY

    The relentless forward march of technology can be overwhelming at times, for students and teachers alike. It doesn't help that some public universities fall behind in keeping up with the latest technology because of limited financial resources, or choose proprietary tools that become familiar, only to replace them with cheaper options later on. The Futures Initiative started out a few years ago with a mission to reshape higher education. One of its key aims was to use network and communications tools to build community and foster greater access to technology. At the time, the CUNY Academic Commons (built in WordPress) was available only to graduate students, so the Futures Initiative created a new WordPress multisite, or network of sites, where graduate students could develop course sites to use with their undergraduate students. As part of my role as a fellow for the Futures Initiative, I maintain this website and teach people how to create their own sites on our network. Many schools now host platforms like ours and the CUNY Academic Commons. If your school doesn't offer a place for you to create your own website, you can also create one for free on WordPress.com. This post offers a brief introduction to WordPress, but more importantly, encouragement, or what I'm calling my "pep-talking points." Hopefully by the time you finish reading, I will have convinced you to create your own course website on WordPress.

    WordPress is a free and open-source content management system that has grown from a small blogging platform in 2003 into the most commonly used website-creation platform in the world, accounting for more than 25% of all websites on the internet. For those unfamiliar with the lingo, open source means that the core software, the more than 50,000 plugins that extend its functionality, and the thousands of themes that control the look and feel of WordPress sites are developed by a community of programmers around the world. A content management system is a tool for putting a website together (managing and displaying different types of content) that, more importantly, is designed so that people without coding experience can create and edit the content of their site in a web browser. Before WordPress and other content management systems, we had to create static web pages in HTML, placing text, images, and hyperlinks into the appropriate spots, styling the pages with CSS, uploading all of the files via FTP, and testing to see how they displayed on different browsers. Back then, if your web designer went on vacation, you might have had to wait for their return so they could fix a typo; today, you can log in to your site, fix the error, and publish the changes in a few minutes without special software. This may be appreciated more by people who remember the old way of doing things (myself included), but it also demonstrates pep-talking point number one: technology is getting easier, not harder. Once you get started, you'll see how easy it really is.

    The Futures Initiative now hosts more than 50 course websites, some of which have more than 30 users, which illustrates pep-talking point number two: if hundreds of people at CUNY can create dozens of course websites in only a few years' time, you can, too! Here at CUNY, some teachers have chosen to use WordPress instead of Blackboard because it can do all of the same things. One major benefit is that teachers keep control over how they use the site once their class is finished. Sharing documents securely, having a place for your syllabus, and creating discussion forums are some of the functions that can be replicated on WordPress. There are also some things that WordPress can do that Blackboard can't, a major one being the opportunity for your students to write public posts. This is directly related to pep-talking point number three: creating content with WordPress is empowering! I have witnessed the undeniable look of satisfaction on the faces of many a workshop participant when they figure out how to add a header image to a page, publish their first test post, and see their changes happen in real time. Once that happens, they're hooked. Making a website doesn't have to be daunting, and it won't be once you start creating your content. And while you're doing it (pep-talking point number four): you and your students are learning valuable, marketable skills that will not only be a great addition to your CV but also give you the tools to create your own online identity, and it won't cost you a dime. If I still haven't convinced you, and you don't know where to start, let me give you pep-talking point number five: start anywhere! WordPress has a pretty limited number of menu options. The key is to realize that you won't break anything that you can't fix, and the very best way to learn any software is to try things and see what happens. If you get stuck, Google your question and you'll find countless resources from a massive online community. There is a lot you can do with WordPress, but the most important thing is to publish that first post, bask in the glow of satisfaction that can only come from creating your own little sliver of the internet, and plan to inspire that confidence in your students by creating a shared course website in WordPress.

  • 15 Sep 2017 10:00 AM | Anonymous member (Administrator)

    By Aaron S. Richmond, Ph.D., Metropolitan State University of Denver (Email: arichmo3@msudenver.edu, Twitter: @AaronSRichmond)

    Yes, it is that time of year: the first week of classes and the start of a new semester. Many of us struggle with what to do. We could easily be that person who, as Gannon (2016) suggests, starts the semester the absolute worst way. Yes, it's Syllabus Day! Most of our students expect this and have a script for the first day. It usually follows this formula: come to class, sit down, take roll, maybe do an ice-breaker that every other teacher does, hand out the syllabus, read the syllabus aloud (or, if daring, put it in a PowerPoint), hold a brief discussion, then dismiss class, hopefully early. I'm here to implore you, nay challenge you, to break this mold and do something different. There are so many things you can do to engage your students and make a strong, positive first impression, and I hope that after reading this blog you will step out of your comfort zone and try a few of them.


    First Impressions are Important!

    As Legg and Wilson (2009) have demonstrated, first impressions, even in an email, can lead students to have more positive beliefs about you as a person. So, what do you do to create a strong positive first impression? Legg and Wilson would suggest sending out an email, prior to the start of the semester, introducing yourself and the class. The email should be less formal, and more about you and who you are as a teacher. You can include the syllabus so that students can read it beforehand. Lyons et al. (2003) suggested that you should arrive early to class on the first day and informally talk to students. At the same time, linger after class to answer any questions and talk to your students. Additionally, dress professionally while being comfortable and true to who you are (Gurung et al., 2014). If you can, change the physical environment of the classroom by rearranging desks to be more inclusive (e.g., circles or U shapes, if possible, as opposed to rows). Weimer (2015) suggests that you should discuss your commitment to teaching. Why do you do it? What do you love about it? To be a great teacher, what do you need from students (i.e., expectations)? Lastly, share your story. It is important to humanize yourself and let your students know that you are just a person like anyone else. For example, on the first day of class, I show a picture of my three girls, lovely wife, dogs, bunnies, horses, and all the other creatures on our farm. I do this to convey that I, like them, have a life outside the classroom, and that I will be flexible and respectful of the fact that sometimes life just happens, to both me and the students. Remember, this first day can truly set the tone and culture of the class for the rest of the semester, so make it count.


    Be Active! Activities for the First Day

    The godmother of teaching and learning, Maryellen Weimer (2017), suggests that being active on the first day can create a positive and productive climate for learning. She suggests activities such as these:

    • Best and worst classes: In this activity, have students write on a piece of paper or on the board what the best class they've had was and what the teacher did to make it the best class. Conversely, have students write about the worst class they've had and how the teacher made it horrible. Then discuss, and pledge to students how you will work to make this course their best class.
    • First day graffiti: In this activity, place flip charts all around the room with different sentence stems. For instance, "I learn best in classes where the teacher..." or "Here's something that makes it easy to learn in a course..." Have students walk around the room and respond to each stem, discuss their answers with one another, and then debrief as a class.
    • Syllabus Speed Dating: With syllabus in hand, have students sit across from one another and ask each other a question about the syllabus OR a question about themselves. Then have students shift one seat down and ask another classmate one of the two questions.
    • Irritating Behaviors (Theirs and Ours): Put students into groups and have them list five things that teachers do that make it difficult to learn. Share their answers with the class. Then, below that list, list five things that you and your colleagues have found students do that make it difficult to teach. Discuss how teaching is a reciprocal process and what you will do to make this relationship productive, respectful, and enjoyable.

    Additionally, Lyons et al. (2003) suggested several activities you can do to "whet students' appetites for course content." For example, have students individually list what topics or concepts they think are associated with your textbook title. Then have them pair up with another student to share their ideas and categorize each idea into chapter- or module-like units. Have the dyads or groups name their chapters and arrange them as a table of contents, and then discuss these tables of contents with your students. This often helps you identify misconceptions about the course, and it also provides an opportunity for students to actively engage with the content of the course and with one another. Another idea is to connect course content to related current news. For instance, DACA was a very relevant issue on our campus while I was teaching educational psychology, so I related the social, emotional, and cognitive impacts of DACA on students from kindergarten through higher education. Finally, Linda Nilson (2003) suggests that teachers develop a "common sense inventory" that students complete to highlight common course content or common misconceptions (e.g., right- vs. left-brained thinkers in educational psychology). The moral of this story is that instead of reading the syllabus, you should engage students in activities that demonstrate the course content, your pedagogical beliefs, and how you will engage students throughout the course.

     

    Set the Tone that will Last All Semester

    If you normally read from your notes and only lecture, then maybe you should just read the syllabus. However, if this isn't how you teach, why do it? Instead, teach something that is not on the syllabus, and in the manner in which you normally teach. That is, if you do a lot of activities, do activities. If you use humor as a pedagogical tool, then try to be funny (no, seriously!). If you use experiential learning in your class, do it on the first day! If you use the Socratic method, have a discussion with your students about the course, what they will do, and so on. If you use cooperative learning a lot in your course, model this with a jigsaw activity or a think-pair-share. The point is, you get one chance at a first impression, so make it count and make it accurate to who you are as a teacher.


    Additionally, Lyons and colleagues (2003) suggested that there are specific things you can do to set the tone. For instance, establish a culture of feedback. I discussed this earlier, but let students know you are very interested in how they are doing in the course and in how you are doing teaching the class. Typically, this is done in an anonymous fashion, but the point is to create a partnership of learning between you and your students. Although some disagree with this, I suggest making "homework 0" a mandatory office visit, meaning you give students some low-stakes incentive to come and meet with you in your office. In this meeting, don't necessarily talk about the class. Rather, get to know your students.

     

    Moving Beyond the First Week

    Now that you've established a positive, engaged, and productive culture of learning during that first week, what do you do in the second week? Joyce Povlacs Lunde (n.d.) has several, in fact 101, things you can do beyond the first week of class, divided into seven categories:

    • First, help students make the transition into your class. This includes things like telling them how much time they will need to study for the course, giving sample test questions and answers, and talking to different students each class period to learn a little about them.
    • Second, direct students' attention to the class. For example, give low-stakes pretests to reward students for reading, and ask students to write down what they think the important issues are.
    • Third, challenge students. Have students write down their own expectations for the course and their goals for learning. Engage in problem-based learning.
    • Fourth, provide support. You can do this by providing study guides, being redundant, using non-graded feedback, and so on.
    • Fifth, encourage active learning. Use think-pair-shares, ask a lot of questions and wait for the answers, and use classroom assessment techniques such as the muddiest point to understand where students are struggling.
    • Sixth, build a community. This is one of my most important goals. Learn their names. I know this is difficult in big classes, but what I do is have students give me a 3 x 5 card with their picture on the back along with their name, year in school, major, and something they like to do for fun. I then study the cards like flashcards. I guarantee your students will appreciate it and feel like they are part of something special.
    • Seventh, get their feedback on your class. There are several ways to do this. You can ask students to provide anonymous feedback on how to improve lessons and assessments, or you can give them inventories such as the Professor-Student Rapport Scale (Wilson et al., 2010), the Student Course Engagement Questionnaire (Handelsman et al., 2005), or the Learning Alliance Inventory (Rogers, 2015), and then use the results to improve your instruction.

    Never Stop Breaking the Mold!

    Ok, so when do you discuss the syllabus? As I've discussed previously, send the syllabus to students prior to the start of the semester. I promise, they can read, but if you don't assess them on it, they won't. So, give a syllabus quiz. My colleagues and I (Richmond, Gurung, & Boysen, 2016) suggest creating a syllabus quiz that requires students to be the teacher and that asks the questions students typically ask (e.g., "Professor, can I turn in assignments late?"). We also suggest that you revisit the syllabus often. This should not be a one-shot lesson. In fact, I have my students pull out the syllabus at least once a week to check on what is due, the reading for next week, and so on.

    In the end, it is important to evolve and adapt your instructional practices to new students, new cultures, and new and different courses. You will develop some really great ways to break the mold based on what I have discussed here, but you need to do more, and you will likely need to modify what you do on the first day of class next semester or quarter. I would like to leave you with a list of really good reads that further explain and provide more ways to change your script for that very important first day of class.


    References and Must Reads!

    Buirs, B. A. (2016, January 4th). First impressions: Activities for the first day of class. Faculty Focus: Higher Ed Teaching Strategies from Magna Publications. Retrieved from https://www.facultyfocus.com/articles/effective-teaching-strategies/first-impressions-activities-for-the-first-day-of-class/

    Gannon, K. (2016, August 3rd). The absolute worst way to start the semester. The Chronicle of Higher Education. Retrieved from https://chroniclevitae.com/news/1498-the-absolute-worst-way-to-start-the-semester

    Gurung, R. A., Kempen, L., Klemm, K., Senn, R., & Wysocki, R. (2014). Dressed to present: Ratings of classroom presentations vary with attire. Teaching of Psychology, 41, 349-353.

    Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of college student course engagement. The Journal of Educational Research, 98(3), 184-192.

    Legg, A. M., & Wilson, J. H. (2009). E-mail from professor enhances student motivation and attitudes. Teaching of Psychology, 36(3), 205-211.

    Lyons, R., McIntosh, M., & Kysilka, M. (2003). Teaching college in an age of accountability. Boston: Allyn and Bacon.

    Morris, T., Gorham, J., Cohen, S., & Huffman, D. (1996). Fashion in the classroom: Effects of attire on student perceptions of instructors in college classes. Communication Education, 45, 135-148.

    Nilson, L. (2003). Teaching at its best: A research-based resource for college instructors (2nd ed.). Bolton, MA: Anker Publishing.

    Povlacs Lunde, J. (n.d.). 101 things you can do in the first three weeks of class. Retrieved from http://www.unl.edu/gradstudies/current/teaching/first-3-weeks

    Provitera McGlynn, A. (2001). Successful beginnings for college teaching: Engaging students from the first day. Madison, WI: Atwood Publishing.

    Raiscot, J. (1986). Silent sales. Minneapolis, MN: AB Publications.

    Richmond, A. S., Gurung, R. A. R., & Boysen, G. A. (2016). An evidence-based guide to college and university teaching: Developing the model teacher. New York, NY: Routledge.

    Rogers, D. T. (2015). Further validation of the learning alliance inventory: The roles of working alliance, rapport, and immediacy in student learning. Teaching of Psychology, 42, 19-25.

    Weimer, M. (2013, August 13th) Five things to do on the first day of class. The Teaching Professor Blog. Retrieved from https://www.facultyfocus.com/articles/teaching-professor-blog/five-things-to-do-on-the-first-day-of-class/

    Weimer, M. (2015, August 9th). The first day of class: A once-a-semester opportunity.  Faculty Focus: Higher Ed Teaching Strategies from Magna Publications. Retrieved from https://www.facultyfocus.com/articles/teaching-professor-blog/the-first-day-of-class-a-once-a-semester-opportunity/

    Weimer, M. (2017, July 19th). The first day of class activities for creating a climate for learning. Faculty Focus: Higher Ed Teaching Strategies from Magna Publications. Retrieved from https://www.facultyfocus.com/articles/teaching-professor-blog/first-day-of-class-activities-that-create-a-climate-for-learning/

    Wilson, J. H., Ryan, R. G., & Pugh, J. L. (2010). Professor-student rapport scale predicts student outcomes. Teaching of Psychology, 37(4), 246-251.


  • 06 Sep 2017 12:00 PM | Anonymous member (Administrator)

    By Janie H. Wilson, Ph.D., Georgia Southern University

    I will begin by admitting that I started teaching 25 years ago as part of my graduate-school assistantship. At that time, I asked the department chair to avoid assigning me to teach statistics because I had seen many years of student struggles, including my own. I agreed to teach research methods, where I could share my passion for experimental and correlational designs. A few weeks into teaching the course, I realized my mistake. I explained to students that the two-group design we were using could be analyzed with a t-test. They stared blankly. I briefly explained that a t-test analyzed two groups when the dependent variable represented interval or ratio data. They glanced around the room at other students, clearly wondering if anyone knew what the heck I was talking about. One student raised her hand and assured me that she “kind of” remembered it.

    Based on the curriculum, I knew research methods had a prerequisite: statistics. How could they not remember a t-test? That term, I had to reteach the t-test, ANOVA, Pearson’s r, simple regression, and chi-square. I did not do a good job of teaching the topics – I simply was not prepared to tackle detailed statistics in research methods.

    After the term ended, I gave a lot of thought to teaching statistics. Clearly I would have to teach analyses in research methods, so why not tackle the prerequisite course? To prepare, I looked back on the way I learned statistics. Mainly the focus had been on hand-calculations. Thinking about it now, I believe the approach made sense at the time. After all, when I began college, we typed term papers on a typewriter, not a PC! Computer labs popped up on campus pretty quickly, but even then, undergraduates did not learn statistical software as part of a statistics course. Later when I attended graduate school, hand-calculations remained the focus, including exams covering matrix algebra.

    Based on my undergraduate and graduate training, I prepped my statistics course with a heavy emphasis on hand-calculations. When I taught statistics for the first time, I spent a week helping students work through their math anxiety as best I could. On exams, I graded based on the process rather than the final answer because students usually made minor math errors along the way. Throughout the term, after students struggled through hand-calculations, I showed them the magic of statistical software. When the answer appeared in a matter of seconds, students often asked me why they had learned all of the math. My answer was always the same: If you know the math, you will understand the analysis better.

    As the years passed, I stood by my decision to focus on hand-calculations. Even when my (younger) colleagues urged me to consider focusing on computer software so I could spend class time on theory and more examples, I gave the same response: If they know the math, they will understand the analysis better.

    It turned out that my noble intentions had no substance. When my colleagues taught research methods, students who had taken my statistics course did not remember how to analyze data using – you guessed it – a t-test. The simplest analysis was lost in the fog of a summer or holiday break. I had done nothing to solve the problem of students forgetting statistics. In fact, I had to be honest with myself that I never had any evidence that my students understood analyses better after going through hand-calculations.

    I wish I could say my course immediately changed, but that would be untrue. I can say that my eyes had been opened, and I started watching what was really happening in my classroom. I would go through how to calculate standard deviation by hand, appreciating the “aha!” moment when students understood that we were obtaining an average spread of values. But rather than feel the elation of a job well done, I wondered what the point was. They were never going to work in a lab where they would calculate values by hand. In today’s world, most students have a powerful computer in their pockets.
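    For instance, the entire “average spread” computation collapses into a few lines of statistical software. Here is a minimal sketch in R (the exam scores are hypothetical) that spells out the hand-calculation steps and then confirms them with the built-in function:

```r
# Hypothetical exam scores
scores <- c(72, 85, 90, 68, 95)

# The hand-calculation, step by step
deviations <- scores - mean(scores)       # how far each value sits from the mean
ss         <- sum(deviations^2)           # sum of squared deviations
variance   <- ss / (length(scores) - 1)   # sample variance
sqrt(variance)                            # standard deviation: the "average spread"

# The same answer in one call
sd(scores)
```

    Students can see in seconds what once took a class period of pencil work, which frees that time for discussing what the number means for research.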

    As I continued to take students through hand-calculations, I noticed that working through examples consumed about 50% of my class time. Sure, the activity kept students awake, but they often produced the wrong answer and became so bogged down in the math that the big picture was never clear to them. I began to ask them to put down their pencils and talk through an example with me. I explained that whether they remembered to take the square root of the final number was not the point; they needed to understand what the number meant for research. No matter what I said, they grabbed their pencils as soon as they could and dove into the problem again, determined to conquer the math.

    Although I have been slow to change the way I teach, the process has begun. And with so much class time free from hand-calculations, I can work with students on research design to provide context for each analysis. We have time to work through more examples, and I have even started incorporating APA style. I remain determined to help students build a solid foundation in our discipline with knowledge of statistics and research methods, the backbone of psychology as a science.

    I am open to change. My next goal is to fully integrate research methods and statistics. Even if the curriculum continues to offer statistics and methods as separate courses, I can integrate methods into statistics for context, and I can integrate statistics into the methods course for repetition and more complete examples. Integration enhances student retention of the information, and I am delighted that psychology departments are beginning to rethink the curriculum and abandon sequenced courses in favor of integration. By letting go of hand-calculations, we make room for the important context offered by research methods.


    Recommended Readings

    Barron, K. E., & Apple, K. J. (2014). Debating curricular strategies for teaching statistics and research methods: What does the current evidence suggest? Teaching of Psychology, 41(3), 187-194. DOI: 10.1177/0098628314537967

    Pliske, R. M., Caldwell, T. L., Calin-Jageman, R. J., & Taylor-Ritzler, T. (2015). Demonstrating the effectiveness of an integrated and intensive research methods and statistics course sequence. Teaching of Psychology, 42(2), 153-156. DOI: 10.1177/0098628315573139

    Stranahan, S. D. (1995). Sequence of research and statistics courses and student outcomes. Western Journal of Nursing Research, 17(6), 695-699.

    Wilson, J. H. (2017). Teaching challenging courses: Focus on statistics and research methods. In Obeid, R., Schwartz, A. M., Shane-Simpson, C., & Brooks, P. J. (Eds.), How we teach now: The GSTA guide to student-centered teaching. Society for the Teaching of Psychology e-book. Retrieved from http://teachpsych.org/ebooks/howweteachnow

    Wilson, J. H., & Joye, S. W. (2017). Demonstrating interobserver reliability in naturalistic settings. In Stowell, J. R. & Addison, W. E. (Eds.), Activities for teaching research methods and statistics in psychology: A guide for instructors. Washington, DC: American Psychological Association.

    Wilson, J. H., & Joye, S. W. (2017). Research methods and statistics: An integrated approach. Thousand Oaks, CA: Sage Publications.

  • 06 Sep 2017 10:00 AM | Anonymous member (Administrator)

    By Jonathan E. Westfall, Ph.D., Delta State University

    The term “deliverable” is not one often heard in education; it is more at home in a project management context. Deliverables are tangible or intangible products that are delivered to customers. The closest thing we may have in education is “learning outcomes.” In a certain sense, a deliverable captures attention and sparks memory and association in ways that we don’t always consider. Over the past five years, I’ve attempted to use Open Educational Resources (OER) to create deliverables in my classroom, producing tangible products that my students can refer to long after our class has concluded. The goal in mind: provide something that keeps the content alive in some way. To do that, I’ll discuss three methods using OER.

    The Custom Textbook We Published

    David Wiley, of Lumen Learning, relates a story about a custom textbook. The idea is simply to take an OER textbook that allows derivative works (i.e., one released under a Creative Commons license that does not specify “no derivatives”) and have students expand the work, customizing it for niche classes that would otherwise lack a specific text. Over a number of semesters, Wiley’s students have created such a book, which becomes the book for the class and which students can download in PDF format.

    However, a PDF can sometimes lack the “realness” and “concreteness” of a book. We hold books to be standards of information, and while the PDF is quickly becoming a similar standard, there is something fulfilling about holding a book or seeing it in print. Several years ago, I challenged students in a Learning & Memory class to write a parenting manual based upon the learning concepts they’d just mastered (e.g., classical conditioning, operant conditioning, modeling, incidental learning). Students were given sheets of acid-free paper and asked to illustrate their tip or suggestion. These were then scanned into a PDF, which could be uploaded to a print-on-demand service. The result was “My Future Parenting Manual: Advice from Childless Me” (http://amzn.to/2vIn4Mt), a collection of work from the class that students could download for free or order in print for a small fee. Indeed, today anyone can order it, as it has an ISBN and is stocked at Amazon.com and other retailers. An added bonus is that such a book can also be used as a fundraiser for a group or class, with profits going toward a group activity.

    The Class Slide Deck

    Students often struggle to remember what specifically they learned in each course. Therefore, one method I’ve used is to ask students to create a PowerPoint slide (or several slides) with the big takeaways from the semester. I ask them to include their photo and name on the slide as well. I then assemble the slides together, and we go over them in class. Most importantly, I make the slides available for download to the entire group. This gives students a tangible memory, in electronic format, of the semester. It also includes the people they took the class with, allowing them to tap into memories not just of the material but also of their classmates’ reactions. Coupling this activity with an OER resource (for example, material that can be modified, expanded upon, or freely distributed) and a self-publication service can again create a tangible item that students can keep: the modern “yearbook,” only specific to a course, department, or discipline.

    The Learning Tools We Build

    Statistics can be a difficult course because of the many concepts it covers. These concepts tend to be learned best when they are integrated into examples and visualized. For many years we have depended on publishers to create such examples, or to build visualization engines that allow students to see how, for example, a distribution changes based upon established parameters.

    Today, however, a set of open-source software tools exists that can change that. R, the statistical language (http://www.r-project.org), provides sophisticated statistical operations to anyone at no cost. RStudio (http://www.rstudio.com), along with Shiny (http://shiny.rstudio.com), allows students and instructors alike to create immersive statistical examples. A classic example is the “Old Faithful” dataset app (http://shiny.rstudio.com/gallery/faithful.html), which allows students to change the size of the bins in a histogram to see how doing so affects the data visualization; the code that runs the app is displayed alongside it. With practice, one can easily create these apps on one’s own. In my classes, I’ve used Shiny to produce analyses that would be too sophisticated for a student to run on their own, but not too sophisticated to interpret. Seeing their data come alive in a series of inferential tests or descriptive plots adds a level of realism to my statistics and upper-level seminar classes. Future plans include prompting students to write their own scripts and apps to show off their research.
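    For readers who have not tried Shiny, the gallery app boils down to something like the following minimal sketch (it uses R’s built-in faithful dataset and requires only the shiny package; treat it as a starting point rather than a finished teaching tool):

```r
library(shiny)

# UI: a slider that controls the number of histogram bins, plus the plot
ui <- fluidPage(
  titlePanel("Old Faithful waiting times"),
  sidebarLayout(
    sidebarPanel(
      sliderInput("bins", "Number of bins:", min = 1, max = 50, value = 30)
    ),
    mainPanel(plotOutput("distPlot"))
  )
)

# Server: redraw the histogram whenever the slider moves
server <- function(input, output) {
  output$distPlot <- renderPlot({
    x    <- faithful$waiting
    bins <- seq(min(x), max(x), length.out = input$bins + 1)
    hist(x, breaks = bins, col = "darkgray", border = "white",
         xlab = "Minutes until next eruption",
         main = "Histogram of waiting times")
  })
}

shinyApp(ui = ui, server = server)
```

    Pasting this into RStudio and clicking “Run App” is enough to let students drag the slider and watch the same data tell different visual stories.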

    Deliverables Revisited

    Through these examples, I hope that you’ve seen what I mean by the term “deliverables” in the classroom. By providing these physical or electronic products to students, we not only make information more memorable but also enhance their skills and backgrounds. Remember that the student who helps build onto an open-source textbook is not only your student but also now an author. The student who uses R to analyze her data is not only going to do well in your statistics course but can now also run complex calculations for her employer without the common complaint of “If I only had SPSS installed.” By working together to integrate OER and deliverables into our classes, we enrich our students, our institutions, and our disciplines.


  • 12 Aug 2017 4:00 PM | Anonymous member (Administrator)

    By Stephen L. Chew, Ph.D., Samford University

    Every beginning instructor discovers sooner or later that his first lectures were incomprehensible because he was talking to himself, so to say, mindful only of his point of view. He realizes only gradually and with difficulty that it is not easy to place one’s self in the shoes of students who do not yet know about the subject matter of the course.

    -Jean Piaget (1962)

    I came into the test really confident that I knew the material but it didn't show that on the test.

    -Student Email Message to Me


    This blog post is about egocentrism, on the part of both you the teacher and your students. Both teachers and students are subject to misunderstanding how well the students are comprehending and learning the course concepts. In teaching, we talk about metacognition, which is a more general term than egocentrism. Metacognition is a person’s awareness of their own thought processes. For the purpose of teaching, we can define metacognition as a person’s awareness of their level of understanding of a concept (Ehrlinger & Shane, 2014). Students with good metacognition have an accurate understanding of how well they understand a concept. Students with poor metacognitive awareness lack a true grasp of how well they understand a concept. Typically, students with poor metacognition are grossly overconfident. They believe they have a deep, thorough understanding when their grasp is actually superficial and riddled with gaps. They fail to distinguish between popular beliefs they may have brought to the class with them and the empirically supported concepts presented in class. Egocentrism, then, is a form of poor metacognition.

    A form of egocentrism can affect instructors as well. Teachers often overestimate the level of understanding of the class. This is known as the curse of knowledge (Fisher & Keil, 2016). As far as the teacher is concerned, he or she has explained concepts clearly, carefully, and completely. The teacher, however, no longer remembers how challenging it was to learn the concepts for the first time. Because students lack the expertise of the teacher, the teacher may have gone faster than the students could follow, or left out critical aspects of a concept because they seemed obvious. To the teacher, there may have been only one possible interpretation of what he or she said: the correct one. To the students, however, lacking any prior knowledge of a concept, there may be multiple ways to interpret the class presentation through faulty assumptions or inferences.

    Almost all veteran teachers have experienced the following scenario. The teacher believes he or she has explained the material clearly and the students have understood it well. The students have attended class and studied the material on their own. On the exam, however, the students do poorly. The teacher is disappointed in the students and may think, “Those lazy students. They must not have studied.” The students are also disappointed. They think, “That sneaky teacher. The test was full of obscure and tricky questions.” Each blames the other, but both teacher and students may be wrong. Neither may have had an accurate awareness of the students’ actual level of understanding.

    So how do we detect this egocentrism on the part of the teacher and student? We use formative assessments to gauge and promote student learning. Formative assessments are brief, low- or no-stakes assessments that are given before a high-stakes exam (Angelo & Cross, 1993). They reveal the level of student understanding to both student and teacher. Formative assessments come in many varieties, such as think-pair-share, minute or muddiest-point papers, and so-called “clicker questions” (e.g., Angelo & Cross, 1993; Ritchhart, Church, & Morrison, 2011; Barkley & Major, 2016).

    For example, say you are covering Piaget’s stages of cognitive development. After your presentation of all the stages, you can check the class’s comprehension using a conceptest (Chew, 2004; Crouch & Mazur, 2001). Present the class with the question below.

    Jean calls Papa Johns and orders a small pizza. “Do you want that cut into 6 slices or 8 slices?” asks the clerk. “Oh 6 slices,” says Jean, “I could never eat 8 slices.” Jean is showing

    1. Egocentrism
    2. Lack of object permanence
    3. Lack of conservation
    4. Assimilation

    Have everyone determine their answer silently. Then, on a signal, have everyone in the class raise their hand with the number of fingers indicating their answer. Both they and you can look around and gauge the frequency of different answers. Next, have them discuss their answer with someone around them, preferably someone who had a different response. After a few minutes of discussion, poll them using hand signals again. Then call on people with different answers and ask them to explain their reasoning. (I’d say the answer is #3.) Conceptests follow the specific procedure above (poll-discuss-poll-explain). Not only do conceptests give both teacher and students a sense of their level of understanding, but they have also been shown to be highly effective in promoting student learning, even when students get the answer wrong (Smith et al., 2009). You may recognize the question as a “clicker question” that you can use with a student response system, but the pedagogy ensures that students process and reflect on their answers. You can do conceptests with “clickers,” but often just a show of hands is faster, simpler, and just as effective.

    Here is an example of a Think-Pair-Share you could use.

    I was walking in a parking lot holding my 3-year old son over my shoulder. He was facing backwards and looking behind me. “Watch out, Dad,” he said, “There is a car behind you.” I was very impressed by this statement. Why?

    You can present the item to students and let them think about it individually, then pair off with another student to discuss it, then share responses as a class. Once again, you can get a sense of students’ level of understanding. In this case, the child realized his Dad couldn’t see the car behind him; preoperational children are supposed to be egocentric, so his ability to take another person’s perspective was impressive.

    And that’s not all. Formative assessments are useful for achieving many desirable learning goals. Here is a list:

    • Improving metacognition for students and teachers
    • Addressing and countering tenacious student misconceptions
    • Illustrating the desired level of understanding of knowledge for students (especially in preparation for exams)
    • Promoting student learning and understanding through retrieval practice and peer learning
    • Promoting rapport and trust between teacher and student
    • Modeling critical thinking and problem solving

    Teachers who have never tried formative assessments often ask me: If the formative assessments are low stakes, why would students be motivated to do them? I can give several reasons. First, they are engaging, and students find them fun to do. Second, they preview the kinds of questions and problems students will see on exams. Finally, students recognize that this is a learning opportunity that will help them master the material. Some teachers tell me that they have too much to cover to use formative assessments. What is the value of covering material if students don’t understand it? Formative assessments make learning visible. If you want to learn more about using formative assessments, check out my video series on the Cognitive Principles of Effective Teaching (http://bit.ly/1LDovLp).


    References

    Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

    Barkley, E. F., & Major, C. H. (2016). Learning assessment techniques: A handbook for college faculty. San Francisco, CA: Jossey-Bass.

    Chew, S. L. (2004). Using ConcepTests for formative assessment. Psychology Teacher Network, 14(1), 10-12. Retrieved from http://www.apa.org/ed/precollege/ptn/2004/01/issue.pdf

    Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69, 970-977. doi: 10.1119/1.1374249

    Ehrlinger, J., & Shane, E. A. (2014). How accuracy in students’ self perceptions relates to success in learning. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.), Applying science of learning in education: Infusing psychological science into the curriculum. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/asle2014/index.php

    Fisher, M., & Keil, F. C. (2016). The curse of expertise: When more knowledge leads to miscalibrated explanatory insight. Cognitive Science, 40, 1251-1269. doi: 10.1111/cogs.12280

    Piaget, J. (1962). Comments on Vygotsky’s critical remarks concerning The Language and Thought of the Child and Judgment and Reasoning in the Child. Addendum in L. S. Vygotsky, Thought and Language. Cambridge, MA: MIT Press. Downloaded from https://www.marxists.org/archive/vygotsky/works/comment/piaget.htm

    Ritchhart, R., Church, M., & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding, and independence for all learners. San Francisco, CA: Jossey-Bass.

    Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why peer discussion improves student performance on in-class concept questions. Science, 323, 122-124. doi: 10.1126/science.1165919

  • 10 Aug 2017 12:00 PM | Anonymous member (Administrator)

    By David B. Strohmetz, Ph.D. & Gary W. Lewandowski, Jr., Ph.D., Monmouth University

    “What can I do with a psychology degree?” “Even though I love psychology, should I major in something more practical?” “Will I be able to get a job?” Your students will inevitably ask you these questions. How will you respond?

    We often tell students that psychology prepares them for a wide range of career paths, but we can be vague as to what those paths may be. One solution is to give students career exploration resources (e.g., Appleby, 2016). But we argue that a better way to address students’ questions about their future is to emphasize that psychology helps them acquire skills that employers value. These include communication skills; critical thinking and research skills; collaboration skills; self-management skills; professional skills; technological skills; and ethical skills (Appleby, 2014). The importance of skill development in the major is integral to the APA 2.0 Guidelines (APA, 2013), but have you asked yourself how you are intentionally helping your students build and strengthen these skills? With a few simple tweaks or modifications of existing assignments, you can help nurture the skills associated with postbaccalaureate success.

    Take, for example, communication – a critical skill for everyone. Do you focus primarily on having students write APA-style papers? While this is clearly useful for communicating within the discipline, APA-style papers have limitations outside of academia or when explaining psychological science to the general public. With this in mind, you could create an assignment in which students write an op-ed piece or blog post sharing findings from a research article with a lay audience. Better yet, have them do this assignment several times, increasingly restricting their word count (perhaps reducing it from 1,000 words to only 500). Besides enhancing their reading skills, students will strengthen their ability to write succinctly and clearly, an important (and rare) skill that will benefit them in any career.

    We should remember that writing is not the only way to communicate. Many students are petrified by the prospect of giving a presentation, yet public speaking is something that they will most certainly have to do at some point in their career. What is the best way to overcome this fear? Practice! You can accomplish this by having your students give multiple low-stakes presentations in your class. Remember that the goal is to have them learn how to give effective presentations. Don’t just say “give a presentation” and then sit back and hope for the best. Take the time to discuss what makes an effective presentation. You could have students develop a top-ten list of signs of a bad presentation and use this as a springboard for discussing how to give a quality presentation, including the effective use of PowerPoint. You may even pick up some pointers on how to improve your own presentation skills!

    It is hard to imagine any career that does not involve working collaboratively with others. Even professors do group work – we call them “committees”! While we do assign group work to our students, it is worth examining whether these assignments really help students learn how to collaborate effectively with others. We often seem to take a “sink or swim” approach in which we put students into groups and hope that collaborative learning takes place. Clearly, this is not an ideal way to promote skill development. The problem is that we don’t always take the time to teach students how to collaborate effectively, nor do we always structure the assignment itself to promote such learning. Whenever you assign group work, take time to discuss strategies for dealing with group conflicts and working effectively together. You might also assign individuals to specific roles or responsibilities within the group, reminiscent of Aronson’s Jigsaw Classroom technique (Aronson & Patnoe, 2011). There should be a level of accountability whereby other group members evaluate each student’s contributions, similar to what happens in the workplace with performance evaluations. Having several small group projects in which these roles or responsibilities rotate among group members will allow each student the opportunity to reinforce these skills. For example, rotate the role of project manager so that one student has responsibility for overseeing all the other group members. Not only does this promote accountability, but it also gives students valuable leadership experience.

    One set of skills we often don’t think about cultivating in the classroom involves self-management, or the ability to manage time and stress. By completing small and large-scale assignments throughout the semester, students learn to balance multiple projects, similar to what they will be doing in the workplace. Again, don’t simply assign students this workload without also helping them to develop the skills necessary for success. Discuss strategies for managing the workload while maintaining a balance between school and personal lives (a common challenge in academia!). Have students discuss possible strategies in groups and share them with the rest of the class, reinforcing their collaborative and presentation skills. Explain the value of breaking a large project into smaller tasks to make the project less overwhelming.

    As you intentionally incorporate skill development into your classes, remember that it is important that students recognize what you are doing and why. When on job or graduate school interviews, students should be able to describe learning experiences that illustrate the types of skills they developed as psychology majors. One strategy is to discuss the skills students will be developing both on the first day of class and when introducing individual course assignments. Periodically remind students of the connections between what they are learning in their psychology classes and the skills that employers value in recent college graduates.

    The APA 2.0 Guidelines are valuable for reminding us of the types of skills that students can and should develop through the psychology major. It is up to us to intentionally help students develop those skills so that they no longer ask, “What can I do with a psychology degree?” but rather exclaim, “I have a psychology degree – let me tell you what I can do!”


    References

    American Psychological Association (2013). APA guidelines for the undergraduate psychology major: Version 2.0. Retrieved from http://www.apa.org/ed/precollege/about/psymajor-guidelines.pdf

    Appleby, D. C. (2014). A skills-based academic advising strategy for job-seeking psychology majors. In R. L. Miller & J. G. Irons (Eds.), Academic advising: A handbook for advisors and students. Vol. 1: Models, students, topics, and issues. Society for the Teaching of Psychology. Retrieved from http://teachpsych.org/ebooks/academic-advising-2014-vol1

    Appleby, D. C. (2016). An online career-exploration resource for psychology majors. Society for the Teaching of Psychology. Retrieved from http://teachpsych.org/resources/Documents/otrp/resources/appleby16students.docx

    Aronson, E., & Patnoe, S. (2011). Cooperation in the classroom: The jigsaw method (3rd ed.). London, UK: Pinter & Martin.

  • 09 Aug 2017 5:00 PM | Anonymous member (Administrator)

    By Jeffery Scott Mio, Ph.D., California State Polytechnic University, Pomona

    The results of the 2016 Presidential Election were a surprise to many, particularly, one might argue, to the organizations responsible for polling potential voters to get an accurate estimate of the outcome. While some might view this as a sign that polling itself is flawed, the issue may lie more with how the samples for the polls were drawn than with the method itself. The discussion that follows aims to elucidate several of the problems with the polling techniques used to forecast the results of the 2016 election. This real-life example may serve as a useful demonstration to students of the issues that arise when proper sampling methods are not used, resulting in a non-representative sample.

    First of all, polling agencies do not, and probably cannot, draw a representative sample. They typically poll those who have landline, as opposed to cellular, telephone service. This skews toward older people, as many if not most young adults do not have landlines. However, if pollsters survey only those who have cell phones, the sample will skew toward younger people, and older voters will be lost. The same is true of Survey Monkey polls, which skew toward younger voters because younger people are more comfortable with computers than older people. This method also skews toward urban and suburban people and away from rural people.

    Second, not only are all of these methods questionable, there is also the problem of who answers the poll. For example, I have a landline, but I never answer it unless the call is from someone I know. If it is a pollster, I will not answer. So what kind of people answer a pollster? We don't know, and we don't know how representative these people are. In addition, pollsters call multiple people at once, and when one person answers the phone, all of the other calls are dropped; so again, what kind of people are answering the poll, and how representative are they?

    Third, and related to the second point, pollsters admit that even when they reach a live person, many people do not answer the polls, so they end up with only about 10% participation. (By the way, the minimum response rate generally considered acceptable from a scientific perspective is 25%.) Who are these people, and are they representative of the voting public? Those who answer the polls may be good people answering honestly, but they are not necessarily representative of the voting population. A "sample" is an estimate of the "population," but if the sample is skewed, we have an inaccurate picture of the population. Therefore, when pollsters say that polls are a "snapshot," they may mistakenly be pointing the camera in the wrong direction.
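    To make this concrete for students, a short simulation can show how differential response rates bias a poll even with an enormous sample. Here is a minimal sketch in R; the preference and response-rate numbers are entirely hypothetical:

```r
set.seed(1)

N <- 1e6                                  # a large voting population
prefers_A <- rbinom(N, 1, 0.48)           # true support for Candidate A: 48%

# Hypothetical nonresponse bias: A's supporters answer pollsters at 12%,
# everyone else at only 8%
answer_rate <- ifelse(prefers_A == 1, 0.12, 0.08)
answered    <- rbinom(N, 1, answer_rate) == 1

poll_estimate <- mean(prefers_A[answered])   # what the poll reports
true_support  <- mean(prefers_A)             # what the election reveals

cat(sprintf("Poll: %.1f%%   Election: %.1f%%\n",
            100 * poll_estimate, 100 * true_support))
# The poll overstates A's support by roughly ten points, despite
# nearly 100,000 respondents: sample size cannot cure a skewed sample.
```

    The lesson for a statistics class is that who answers matters far more than how many answer.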

    Fourth, the real question is, "How do we sample 'likely' voters when we do not know who is likely to vote?"  As it turned out, Trump did turn out many first-time voters and people who had not voted in a long time. On the other hand, Clinton did not excite enough of her voters, and especially because everyone thought that Clinton was going to win, many people did not show up, or younger generations of voters felt free to vote for third-party candidates. If only a very small percentage of those who voted for third-party candidates had voted for Clinton (especially in Pennsylvania, Ohio, Michigan, and Wisconsin), Clinton would now be president.

    Finally, as polls indicated, the "undecided" vote was four times higher than in most other elections. Most people read "undecided" and figure that those voters will break about 50-50, so Clinton's lead would remain the same in the final count. However, history tells us that most undecided voters actually break in one direction. In my estimation, most of the undecided voters were people who normally vote Republican, were reluctant to support Trump, but had a difficult time crossing party lines to vote for Clinton. Their indecision was mostly, "Should I vote for Clinton, or should I vote for a third-party candidate (or should I not vote at all)?" However, when the then-FBI Director, James Comey, announced a review of a new batch of emails, I think that most of the undecideds said, "Oh, I can't deal with more Clinton scandals, so I will hold my nose and vote for my party." Earlier estimates were that Clinton had the support of over 90% of Democrats while Trump had only the low-80% range among Republicans, but in the actual vote count, Trump had over 90%. This tells me that the undecideds came home to the Republican Party.

    The bottom line is that polls are supposed to sample a population, and that sample is supposed to be representative of the population. If you do not have a representative sample, your poll will necessarily be inaccurate. Because some actual voters may have been more suspicious of polls, the media, and anything that smacks of tradition, they probably did not answer the polls in sufficient numbers, resulting in a biased sample. This is why all of the polls seemed to support the notion that Clinton was going to win, which in fact did not happen. One thing widely accepted among social scientists is that any one poll may be wrong, but the aggregate of polls is accurate. The problem with that line of thinking in 2016 is that the analysts were blind to the fact that all of the polls were skewed in Clinton's direction, so of course she would systematically be projected as the winner.
