PsychoPy is an open-source application for presenting stimuli and collecting data in a wide range of neuroscience, psychology and psychophysics experiments. It’s a free, powerful alternative to Presentation™ or e-Prime™, written in Python (a free alternative to Matlab™).

News

The Python for Neuroscience workshop (bootcamp) will be running again on 21st-23rd April 2015 (sorry, it’s a beginners’ stream only this year). Book your place now at Python for Neuroscience workshop bookings.

Latest version: 1.81.03 was released in December 2014, with improved cross-version compatibility (including the ability for scripts to specify the version of the library they should run against) and additional hardware support.

See the Changelog for a complete list of additions and fixes

Contents

About PsychoPy

Overview

PsychoPy is an open-source package for running experiments in Python (a real and free alternative to Matlab). PsychoPy combines the graphical strengths of OpenGL with the easy Python syntax to give scientists a free and simple stimulus presentation and control package. It is used by many labs worldwide for psychophysics, cognitive neuroscience and experimental psychology.

Because it’s open source, you can download it and modify the package if you don’t like it. And if you make changes that others might use then please consider giving them back to the community via the mailing list. PsychoPy has been written and provided to you absolutely for free. For it to get better it needs as much input from everyone as possible.

Features

There are many advantages to using PsychoPy, but here are some of the key ones:

  • Simple install process

  • Precise timing

  • Huge variety of stimuli (see screenshots) generated in real-time:
    • linear gratings, bitmaps constantly updating
    • radial gratings
    • random dots
    • movies (DivX, mov, mpg...)
    • text (unicode in any truetype font)
    • shapes
    • sounds (tones, numpy arrays, wav, ogg...)
  • Platform independent - run the same script on Win, OS X or Linux

  • Flexible stimulus units (degrees, cm, or pixels)

  • Coder interface for those that like to program

  • Builder interface for those that don’t

  • Input from keyboard, mouse, microphone or button boxes

  • Multi-monitor support

  • Automated monitor calibration (for supported photometers)

Hardware Integration

PsychoPy supports communication via serial ports, parallel ports and compiled drivers (dlls and dylibs), so it can talk to any hardware that your computer can! Interfaces are prebuilt for:
  • Spectrascan PR650, PR655, PR670
  • Minolta LS110, LS100
  • Cambridge Research Systems Bits++
  • Cedrus response boxes (RB7xx series)
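
For example, a supported photometer attached to a serial port can be auto-detected from a script. This is a minimal sketch (not the only way to use the hardware module); findPhotometer() returns None if no device is found:

from psychopy import hardware

# scan the serial ports for any supported photometer (PR650, PR655, LS100, ...)
photometer = hardware.findPhotometer()
if photometer is None:
    print('No photometer found - check the connection and power')
else:
    print('Found a photometer: %s' % photometer)
    print('Current luminance: %.2f cd/m^2' % photometer.getLum())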

System requirements

Although PsychoPy runs on a wide variety of hardware, and on Windows, OS X or Linux, it really does benefit from a decent graphics card. Get an ATI or nVidia card that supports OpenGL 2.0. Avoid built-in Intel graphics chips (e.g. GMA 950).

Testimonials - what do people think of PsychoPy?

OK, so we know that PsychoPy has quite a lot of users.

We know that quite a few people have written manuscripts that cited PsychoPy.

But did the users actually enjoy using the software or was it a painful experience? This page holds the (roughly honest) opinions of users. If you’d like to add your own testimonial then go to this google form. (Updating the testimonials will be done periodically, so don’t expect your comment to appear here instantly, but we’ll try and remember to do it every now and then. Please don’t swear!!)


PsychoPy is one of those things that improve the life of an experimental psychologist. Really. #python #neuroscience

- Davide Massida, via twitter


It's wonderful to have a product that makes stimulus presentation easy (and is free!) while also providing Python as the underlying language so we can add the power we need.

We've used PsychoPy in several studies and we're super happy with it. Thanks, team!

- Nate Vack, Research Programmer, UW-Madison


The developers of PsychoPy have provided a valuable service for the scientific community. PsychoPy is a powerful presentation software package built on the foundation of one of the most ubiquitous open-source programming languages in use today. PsychoPy's Builder interface makes it accessible to beginners, and the easy-to-use API makes it helpful to even the most advanced Python programmers when programming experiments. I say this as a former professional computer programmer. When I came to neuroscience, I spent a lot of time and consideration regarding which software packages I would use to program my experiments. I chose PsychoPy, and I have not regretted that decision.

- Jared Roberts, PhD student, University of California, Irvine


What made me switch was the combination of Builder and Coder options. Students are not scared of the Builder, but I can still write code when needed (and even hide it in student experiments).

- Harriet Allen, Lecturer, University of Nottingham


PsychoPy is excellent. I came to it from having used Macromedia Director (Lingo) and PsyScope for my experiments before. Director was very powerful but also too big, and PsyScope -- as any 'experimental package' really -- too limiting. PsychoPy fit the bill: It is flexible, yet relatively easy to learn, and it is free, and cross-platform compatible.
What never ceases to amaze me, though, is the dedication of its developers to continually improve it, and the extremely helpful and fast responses I get to any queries.
I am mostly using it for my own research, and with my PhD students, but I have once also used it for a UG project. The student did not know the first thing about programming, but needed to use a BART. Fortunately, there was a BART demo with PsychoPy, and the student could very easily adapt this to her needs.
Thumbs up all around for PsychoPy!

- Marc Buehner, Reader, Cardiff University


PsychoPy is a fantastic tool for creating experiments. It combines the elegance and power of Python programming for experts with a graphical user interface for novices that the PsychoPy team has put an enormous effort into developing. Because it is open source, its growth in popularity must have already resulted in many labs saving thousands of dollars in software licenses for alternatives they would have had to buy, like MATLAB. Because it is Python, which has emerged as the standard open-source language for interdisciplinary efforts in neuroscience, it is furthering closer integration of psychology with neuroscience. Because there is a helpful community of expert users on the forums, the PsychoPy community helps many new researchers get their start with experiment programming. For my department's postgrad students, I am currently planning a 3-day workshop taught mainly by Software Carpentry, a group of expert programmers volunteering to teach for free (who are only interested in teaching open-source solutions), to skill up the next generation of scientists with advanced programming techniques that facilitate replicability, using R (which plays nicely with Python), Python and PsychoPy. In any field it is generally difficult to get much traction against established legacy solutions with thousands of users (in this area, including MATLAB), but PsychoPy has done so.

- Alex Holcombe, Associate Professor, University of Sydney


The great thing about PsychoPy isn't that it's free (though it is), or that the people who make it are very dedicated and helpful people (though they are), or that it can be extended to use a huge range of paradigms and hardware (though it can). The great thing is that PsychoPy is all done in Python.
I never took any CompSci or programming courses. I didn't grow up soldering together transistors. I just about know where the "on" switch is. And yet, I was able to put together an experiment that uses a keyboard, a touchscreen, an eyetracker, and a microphone for input. With other psychological experiment software, that required kludgey workarounds, advanced programming skills, and probably a punching bag next to your desk. With PsychoPy, with the demos that come with it, with all the diverse applications that get posted to the usergroup every day, it's just a bunch of lines of human-readable Python code.

- Daniel Bürkle, PhD student, University of Canterbury, Department of Linguistics


PsychoPy is great. I use it for almost all my experiments now. It transfers between Mac and PC, has a great GUI but is easy to customise with a bit of scripting, has a helpful user community, can be explained to a student in an hour or so, is constantly updated by people who are active research scientists, is free and - well - actually, what more could you ask for?

- Fenja Ziegler, Senior Lecturer, University of Lincoln


I have used PsychoPy to present audio and visual stimuli for fMRI and behavioral experiments. PsychoPy made it very easy to put together presentation scripts quickly with its demos that cover most elementary use cases. Being free and independent of Matlab makes it possible to run my experiments anywhere without worrying about licensing issues.

- J. Swaroop Guntupalli, Postdoctoral Researcher, Dartmouth College


I learned PsychoPy when my advisor decided to move away from MATLAB due to cost. It was intuitive to learn, and I now love PsychoPy Coder for stimuli creation.

Its extra tools, such as the Monitor Center GUI, really make setting up and calibrating less of a headache.

The thing I love most is that PsychoPy is one of the most portable platforms to code in; I can code on my PC laptop and then put it on a lab Mac and things almost always run without extra debugging. NOTHING that I've tried has such a good track record.

- Andy Silva, PhD student, UCLA


I have been using PsychoPy since 2005. There are many reasons why I use PsychoPy---a few of them: 1) The design decisions underlying PsychoPy's organisation, feature set, and implementation make it simple and powerful to implement experiments, which I believe stems in a large part from the developers and maintainers being active and working scientists. 2) PsychoPy has a supportive and responsive community of developers and users, leading to software that is actively maintained and is welcoming to new and experienced users. 3) PsychoPy's foundation in the Python programming language integrates it with the outstanding infrastructure and community of the Python ecosystem, allowing PsychoPy to be a pivotal link in a cohesive pipeline of software for carrying out, analysing, and publishing scientific work. 4) PsychoPy's open-source architecture allows for precise understanding of how the software operates, and its free license makes it much simpler to deploy PsychoPy in a variety of environments and be confident of its stability.

- Damien Mannion, Lecturer, UNSW Australia


I have migrated to PsychoPy for all of my experiments and get my students to create experiments using PsychoPy in my psycholinguistics classes. It is the fastest way to get an experiment up and running and despite being fast and easy is still capable of being extended endlessly to achieve pretty much any research goal you might have. I think it's the best experiment software available (at any price) and the fact that it's free is just the icing on the cake.

- Mark Scott, Prof, United Arab Emirates University


We just recently started to use PsychoPy for experiments in developmental psychology, but we have been very impressed so far by its capabilities, pace of development, and the responsiveness of its developers. We will definitely continue to use it for future studies.

- anon


PsychoPy is a very useful tool. It combines great experimental control with a gentle learning curve. Its multiplatform capabilities, together with its open-access approach, make it a great contribution, helping researchers in ways that few other software packages can. Its impact is more than remarkable. Currently we are changing all of our experimental tasks from other programs to PsychoPy and we could not be happier with the results and the process. I'd personally recommend PsychoPy to any researcher and will use it as both a teaching and research tool.

- Joaquín Morís, PhD, University of Barcelona


I intend to make the switch to psychopy from psychtoolbox over the next few years. PsychoPy is definitely the most mature package for conducting psychophysical experiments using Python. The function libraries are impressive and the community is growing. I have started using Python in favour of Matlab for other areas of my work. With PsychoPy's help I hope to be Matlab-free in a few years time!

- Thomas Wallis, Postdoctoral Fellow, University of Tübingen


PsychoPy represented a welcome and much needed reprieve for our lab when we heard of it a couple years ago. We used closed source alternatives prior to that point, and we were constantly frustrated by problems with site licenses and proprietary data formats. It's hard to describe how gratifying it was to code a fast, solid experimental paradigm completely in python and then have all our output files be csv's. PsychoPy has only gotten better since then, with the addition of more built-in functions and improvements to the Builder GUI functionality.

One project I helped create involved twenty or more subjects simultaneously performing a delay discounting decision making task while being paired with other subjects from one trial to the next based on religious group status. This was relatively easy to do with PsychoPy due to the flexibility of the software and the ease of including pure python code in the experiment. It would have been much more complicated with other software, if it would have been possible at all (not to mention dealing with program licenses for 20+ machines).

As a side note, the help forum for PsychoPy is one of the best I've been involved with. I've asked several questions, and frequently the developers of PsychoPy themselves have responded quickly and solved my problem. Prior to using PsychoPy, I struggled to find answers to my queries about closed source alternatives, and it was always a frustrating experience. Worse still, I didn't learn anything new, whereas my understanding of experimental methods generally and of python programming in particular has expanded by using PsychoPy. I also frequently see users answer questions posed by people new to the program to help them orient themselves. It's a friendly and helpful community, and unfortunately that's not always the case with this kind of thing!

- Andrew Poppe, PhD Student, University of Minnesota


Psychopy is an attempt to provide a usable psychological presentation suite for experiments developed by non-expert programmers. It has a great tutorial that walks new users through the interface and how to successfully create presentation paradigms. It's open-source and based on a programming platform that is skyrocketing in its adoption.

- Keith McGregor, Assistant Professor, Emory University


I programmed my first experiment using PsychoPy around February 2011. Since then I have basically only used PsychoPy for all the experiments I was responsible for and see no need to change this in the future.

PsychoPy came at the right time for me and (a) is comparatively easy to use because of relying on the great programming language Python, (b) offers a wealth of ever increasing functionality, and (c) hides exactly those technical issues behind a comfortable API that I am not interested in dealing with. Consequently, I can only recommend it to anyone who wants to use free software to program his or her experiments.

- Henrik Singmann, PostDoc, University of Freiburg


PsychoPy is the best software for experiment programming that I've seen. The Builder interface greatly eases learning, so that I can explain the basics to undergrad students in 5 minutes. And the power of Python makes the possibilities for power users virtually unlimited. It's free, fast-developing, and has a wonderful support community.

- Andrey Chetverikov, PhD student, researcher, St. Petersburg State University


I have been using PsychoPy since 2010 and it has been a great boon for my research. The design of the software library is elegant and intuitive, making it easy to get started and easy to use. The software is powerful, enabling experiments in which complex stimuli are presented with accurate timing. The user and developer communities are helpful and welcoming to newcomers.

In addition to the effort that has gone into making installation of PsychoPy easy and uniform, the implementation of an easy-to-use GUI has made collaborations with colleagues in other institutions a simple matter of sharing code, without needing to worry about complicated operation instructions ("just press the big green 'run' button"!), and without installation hassles.

The use of Python as a basis for the implementation of psychopy promises a bright future for the project and for its increased use in neuroscience and psychology experiments, as the language is rapidly becoming the lingua franca of reproducible computational data analysis in neuroscience and psychology.

- Ariel Rokem, Postdoc, Stanford University


Thank you very much, Jonathan Peirce. We are starting to use PsychoPy instead of E-Prime and it's really nice to work with your software. Having everything that we need in your software is amazing; we really appreciate your hard work, it's impressive. You are making lots of people's work easier and faster.

Keep working like this, and keep it open source!!

- Dario, Student, Brain House Institute


I remember the days when there was a need to deal with made-up "scripting languages" to implement psychophysical experiments. I remember the waste of time learning these additional tools, I remember the pain associated with the "maintenance" of licenses, I remember the "community" of suffering souls that also could not get things done in reasonable time.

I am so glad that these times are over. Thanks to PsychoPy and thanks to its developers for being a team player in the larger eco-system that is scientific Python software. With PsychoPy I can do simple things in a simple way. At the same time, PsychoPy can channel the combined power of a mind-bending number of specialized Python packages available to be utilized in any experiment with a few lines of code -- in the very language I use to analyze and visualize acquired data.

You won't believe it, unless you try it.

- Michael Hanke, Prof, University of Magdeburg, Germany


PsychoPy is more powerful and more accessible than many proprietary experiment control packages. It is one of the best choices.

- Attila Krajcsi, associate professor, Eötvös Loránd University

Screenshots

A few screenshots are provided here to give you a flavour, but it’s easier to download the software and run the demos (from the demos menus in each view) to see the variety of stimuli that can be generated.

PsychoPy is one of very few packages that allows a choice of interface. Use the Coder view for those that like to program (or just use your own editor):

The Coder view

and the Builder view for those that don’t:

The Builder view

PsychoPy can handle every type of stimulus you can imagine...

Images and movies of most formats:


Random dots and element arrays, drawn in realtime:

Random dot kinematograms (RDKs)
Complex arrays of elements drawn in realtime

Many text options and dialog boxes:

Lots of options for drawing Unicode text stimuli
It's really easy to build dialog boxes

For more ideas about PsychoPy’s massive range of stimuli, install it, go to the Coder view and run some of the demo scripts (there’s a whole demos menu).

Credits

Developers

PsychoPy was initially created and maintained by Jon Peirce but has many contributors to the code:

Jeremy Gray, Sol Simpson, Yaroslav Halchenko, Erik Kastman, Mike MacAskill, William Hogman, Jonas Lindeløv, Ariel Rokem, Dave Britton, Gary Strangman, C Luhmann, Hiroyuki Sogo

You can see details of contributions on Ohloh.net and there’s a visualisation of PsychoPy’s development history on YouTube.

PsychoPy also stands on top of a large number of other developers’ work. It wouldn’t be possible to write this package without the preceding work of those that wrote the Dependencies.

Support

Software projects aren’t just about code. A great deal of work is done by the community in terms of supporting each other. Jeremy Gray, Mike MacAskill, Jared Roberts and Jonas Lindelov particularly stand out in doing a fantastic job of answering other users’ questions. You can see the most active posters on the users list here: https://groups.google.com/forum/#!aboutgroup/psychopy-users

Funding

The PsychoPy project has attracted small grants from the HEA Psychology Network and Cambridge Research Systems. Thanks to those organisations for their support.

Jon is paid by The University of Nottingham (which allows him to spend time on this) and his grants from the BBSRC and Wellcome Trust have also helped the development of PsychoPy.

Contributing to the project

PsychoPy is an open-source, community-driven project. It is written and provided free out of goodwill by people that make no money from it and have other jobs to do. The way that open-source projects work is that users contribute back some of their time.

Why make it free?

It has taken, literally, thousands of hours of programming to get PsychoPy where it is today and it is provided absolutely for free. Without someone working on it full time (which would mean charging you for it) the only way for the software to keep getting better is if people contribute back to the project.

Please, please, please make the effort to give a little back to this project. If you found the documentation hard to understand then think about how you would have preferred it to be written and contribute it.

How do I contribute changes?

For simple changes, or if you aren’t so confident with things like version control systems, just send your changes to the mailing list.

If you want to make more substantial changes then it’s often good to discuss them first on the developers mailing list.

The ideal model is to contribute via the repository on GitHub. There is more information on that in the For Developers section of the documentation.

Contribute to the Forum (mailing list)

The easiest way to help the project is to write to the forum (mailing list) with suggestions and solutions.

For documentation suggestions please try to provide actual replacement text. You, as a user, are probably better placed to write this than the actual developers (they know too much to write good docs)!

If you’re having problems, e.g. you think you may have found a bug:
  • take a look at the Troubleshooting and Common Mistakes (aka Gotchas) first
  • submit a message with as much information as possible about your system and the problem
  • please try to be precise. Rather than say “It didn’t work” try to say what specific form of “not working” you found (did the stimulus not appear? or it appeared but poorly rendered? or the whole application crashed?!)
  • if there is an error message, try to provide it completely

If you had problems and worked out how to fix things, even if it turned out the problem was your own lack of understanding, please still contribute the information. Others are likely to have similar problems. Maybe the documentation could be clearer, or your email to the forum will be found by others googling for the same problem.

To make your message more useful, please try to:
  • provide info about your system and PsychoPy version (e.g. the output of the sysInfo demo in coder). A lot of problems are specific to a particular graphics card or platform
  • provide a minimal example of the breaking code (if you’re writing scripts)

Citing PsychoPy

If you use this software, please cite one of the papers that describe it.

  1. Peirce, JW (2007) PsychoPy - Psychophysics software in Python. J Neurosci Methods, 162(1-2):8-13
  2. Peirce JW (2009) Generating stimuli for neuroscience using PsychoPy. Front. Neuroinform. 2:10. doi:10.3389/neuro.11.010.2008

Citing these papers gives the reviewer/reader of your study information about how the system works, attributes some credit for its original creation, and provides a way to justify the continued development of the package.

Documentation

A pdf copy of the current documentation is available at:
http://www.psychopy.org/PsychoPyManual.pdf

Contents:

General issues

These are issues that users should be aware of, whether they are using Builder or Coder views.

Monitor Center

PsychoPy provides a simple and intuitive way for you to calibrate your monitor and provide other information about it and then import that information into your experiment.

Information is inserted in the Monitor Center (Tools menu), which allows you to store information about multiple monitors and keep track of multiple calibrations for the same monitor.

For experiments written in the Builder view, you can then import this information by simply specifying the name of the monitor that you wish to use in the Experiment settings dialog. For experiments created as scripts you can retrieve the information when creating the Window by simply naming the monitor that you created in Monitor Center. e.g.:

from psychopy import visual

# 'SonyG500' is the name of a calibration stored in Monitor Center
win = visual.Window([1024, 768], monitor='SonyG500')

Of course, the name of the monitor in the script needs to match perfectly the name given in the Monitor Center.
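
The same monitor information can also be supplied programmatically via psychopy.monitors if you prefer to keep everything in the script. This is a minimal sketch; the dimensions below are placeholders and should be replaced with your own measurements:

from psychopy import monitors, visual

# describe the display (placeholder values)
mon = monitors.Monitor('SonyG500')   # an existing Monitor Center name, or a new one
mon.setSizePix([1024, 768])          # resolution in pixels
mon.setWidth(40.0)                   # width of the viewable screen in cm
mon.setDistance(57.0)                # viewing distance in cm
mon.saveMon()                        # optional: store it for future sessions

# a Monitor object (or just its name) can be passed to the Window
win = visual.Window([1024, 768], monitor=mon, units='deg')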

Real world units

One of the particular features of PsychoPy is that you can specify the size and location of stimuli in units that are independent of your particular setup, such as degrees of visual angle (see Units for the window and stimuli). In order for this to be possible you need to inform PsychoPy of some characteristics of your monitor. Your choice of units determines the information you need to provide:

  • ‘norm’ (normalised to width/height) - no monitor information needed
  • ‘pix’ (pixels) - screen width in pixels
  • ‘cm’ (centimeters on the screen) - screen width in pixels and screen width in cm
  • ‘deg’ (degrees of visual angle) - screen width (pixels), screen width (cm) and viewing distance (cm)

Calibrating your monitor

PsychoPy can also store and use information about the gamma correction required for your monitor. If you have a Spectrascan PR650 (other devices will hopefully be added) you can perform an automated calibration in which PsychoPy will measure the gamma value that needs to be applied to your monitor. Alternatively, this value can be entered manually into the grid on the right of the Monitor Center. To run a calibration, connect the PR650 via the serial port and, immediately after turning it on, press the Find PR650 button in the Monitor Center.

Note that, if you don’t have a photometer to hand, PsychoPy includes a method for determining the necessary gamma value psychophysically (see gammaMotionNull and gammaMotionAnalysis in the demos menu).
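
If you already know a suitable gamma value (e.g. from a previous calibration) it can also be stored from a script rather than typed into the grid. A short sketch, with an illustrative value only:

from psychopy import monitors

mon = monitors.Monitor('SonyG500')
mon.setGamma(2.2)   # a single gamma applied to all guns (illustrative value)
mon.saveMon()       # store the value with the monitor definition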

The two additional tables in the Calibration box of the Monitor Center provide conversion from DKL and LMS colour spaces to RGB.

Units for the window and stimuli

One of the key advantages of PsychoPy over many other experiment-building software packages is that stimuli can be described in a wide variety of real-world, device-independent units. In most other systems you provide the stimuli at a fixed size and location in pixels, or as a percentage of the screen, and then have to calculate how many cm or degrees of visual angle that corresponds to.

In PsychoPy, after providing information about your monitor, via the Monitor Center, you can simply specify your stimulus in the unit of your choice and allow PsychoPy to calculate the appropriate pixel size for you.

Your choice of unit depends on the circumstances. For conducting demos, the two normalised units (‘norm’ and ‘height’) are often handy because the stimulus scales naturally with the window size. For running an experiment it’s usually best to use something like ‘cm’ or ‘deg’ so that the stimulus is a fixed size irrespective of the monitor/window.

For all units, the centre of the screen is represented by coordinates (0,0), negative values mean down/left, positive values mean up/right.
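
For example, with a window whose units are set to ‘deg’, a stimulus 3 degrees below and to the left of fixation is simply placed at (-3, -3). A minimal sketch, assuming a monitor called 'testMonitor' (a placeholder name) has been set up in Monitor Center:

from psychopy import visual

# 'testMonitor' must exist in Monitor Center for 'deg' units to be meaningful
win = visual.Window([1024, 768], monitor='testMonitor', units='deg')

# (0, 0) is the screen centre; negative values are down/left
fixation = visual.TextStim(win, text='+', pos=(0, 0))
target = visual.GratingStim(win, tex='sin', size=2, sf=1, pos=(-3, -3))

fixation.draw()
target.draw()
win.flip()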

Height units

With ‘height’ units everything is specified relative to the height of the window (note the window, not the screen). As a result, the dimensions of a screen with a standard 4:3 aspect ratio will range from (-0.6667,-0.5) at the bottom left to (+0.6667,+0.5) at the top right. For a standard widescreen (16:10 aspect ratio) the bottom left of the screen is (-0.8,-0.5) and the top right is (+0.8,+0.5). This type of unit can be useful in that it scales with window size, unlike Degrees of visual angle or Centimeters on screen, but stimuli remain square, unlike Normalised units. Obviously it has the disadvantage that the locations of the right and left edges of the screen have to be determined from knowledge of the screen dimensions. (These can be determined at any point from the Window.size attribute.)

Spatial frequency: cycles per stimulus (so will scale with the size of the stimulus).

Requires : No monitor information

Normalised units

In normalised (‘norm’) units the window ranges in both x and y from -1 to +1. That is, the top right of the window has coordinates (1,1), the bottom left is (-1,-1). Note that, in this scheme, setting the height of the stimulus to 1.0 will make it half the height of the window, not the full height (because the window has a total height of 2, spanning -1 to +1). Also note that specifying the width and height to be equal will not result in a square stimulus if your window is not square - the image will have the same aspect ratio as your window. E.g. on a 1024x768 window a size of (0.75,1) will be square.

Spatial frequency: cycles per stimulus (so will scale with the size of the stimulus).

Requires : No monitor information
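
The aspect-ratio point above can be checked directly: on a 1024x768 window a ‘norm’ size of (0.75, 1) covers 384x384 pixels and so looks square. A minimal sketch:

from psychopy import visual

win = visual.Window([1024, 768], units='norm')

# the full width is 2.0 (from -1 to +1), so 0.75 here is 384 pixels wide;
# a height of 1.0 is half the window, i.e. 384 pixels tall - a square
patch = visual.Rect(win, width=0.75, height=1.0, fillColor='white')
patch.draw()
win.flip()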

Centimeters on screen

Set the size and location of the stimulus in centimeters on the screen.

Spatial frequency: cycles per cm

Requires : information about the screen width in cm and size in pixels

Assumes : pixels are square. Can be verified by drawing a stimulus with matching width and height and verifying that it is in fact square. For a CRT this can be controlled by setting the size of the viewable screen (settings on the monitor itself).
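
The square-pixel check described above takes only a few lines; if the rectangle below does not look square on screen, the stored screen width (or the monitor’s own viewable-area settings) needs adjusting. A sketch, assuming a monitor named 'testMonitor' has been defined with its width in cm:

from psychopy import visual

win = visual.Window([1024, 768], monitor='testMonitor', units='cm')

# 5 cm wide and 5 cm tall - should look perfectly square if pixels are square
check = visual.Rect(win, width=5, height=5, lineColor='white')
check.draw()
win.flip()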

Degrees of visual angle

Use degrees of visual angle to set the size and location of the stimulus. This is, of course, dependent on the distance that the participant sits from the screen as well as the screen itself, so make sure that this is controlled, and remember to change the setting in Monitor Center if the viewing distance changes.

Spatial frequency: cycles per degree

Requires : information about the screen width in cm and pixels and the viewing distance in cm

There are actually three variants: ‘deg’, ‘degFlat’, and ‘degFlatPos’

‘deg’ : Most people using degrees of visual angle choose to make the assumption that a degree of visual angle spans the same number of pixels at all parts of the screen. This isn’t actually true for standard flat screens - a degree of visual angle at the edge of the screen spans more pixels because it is further from the eye. For moderate eccentricities the error is small (a 0.2% error in size calculation at 3 deg eccentricity) but grows as stimuli are placed further from the centre of the screen (a 2% error at 10 deg). For most studies this form of calculation is preferred, as it does not result in a warped appearance of visual stimuli, but if you need greater precision at far eccentricities then choose one of the alternatives below.

‘degFlatPos’ : This accounts for flat screens in calculating position coordinates of visual stimuli but leaves size and spatial frequency uncorrected. This means that an evenly spaced grid of visual stimuli will appear warped in position, but the stimuli will keep the same size and spatial frequency (in pixel terms) wherever they are placed.

‘degFlat’: This corrects the degree calculations for the flatness of the screen for each vertex of your stimuli. Square stimuli in the periphery will, therefore, become more spaced apart but they will also get larger and rhomboid in the pixels that they occupy.
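
The variant is selected simply by the units name. For example, to correct positions (but not sizes) for a flat screen you would ask for ‘degFlatPos’; a sketch, again assuming a 'testMonitor' definition that includes the viewing distance:

from psychopy import visual

# ask for flat-screen-corrected positions explicitly
win = visual.Window([1024, 768], monitor='testMonitor', units='degFlatPos')

# at 10 deg eccentricity the position correction starts to matter
probe = visual.GratingStim(win, tex='sin', size=2, sf=2, pos=(10, 0))
probe.draw()
win.flip()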

Pixels on screen

You can also specify the size and location of your stimulus in pixels. Obviously this has the disadvantage that sizes are specific to your monitor (because all monitors differ in pixel size).

Spatial frequency: cycles per pixel (this catches people out, but is used to keep it consistent with the other units). If using pixels as your units you probably want a spatial frequency in the range 0.2-0.001 (i.e. from one cycle every 5 pixels to one every 1000 pixels).

Requires : information about the size of the screen (not window) in pixels, although this can often be deduced from the operating system if it has been set correctly there.

Assumes: nothing
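
So, in ‘pix’ units, a grating with one cycle every 20 pixels needs sf=0.05. A minimal sketch:

from psychopy import visual

win = visual.Window([1024, 768], units='pix')

# size is in pixels; sf=0.05 means one cycle every 20 pixels
grating = visual.GratingStim(win, tex='sin', size=256, sf=0.05)
grating.draw()
win.flip()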

Color spaces

The color of stimuli can be specified when creating a stimulus and when using setColor(), in a variety of ways. There are three basic color spaces that PsychoPy can use: RGB, DKL and LMS. Colors can also be specified by a name (e.g. ‘DarkSalmon’) or by a hexadecimal string (e.g. ‘#00FF00’).

examples:

from psychopy import visual

win = visual.Window([400, 400])  # any small window will do for these examples
stim = visual.GratingStim(win, color=[1,-1,-1], colorSpace='rgb')  # will be red
stim.setColor('Firebrick')  # one of the web/X11 color names
stim.setColor('#FFFAF0')  # an off-white
stim.setColor([0,90,1], colorSpace='dkl')  # modulate along S-cone axis in isoluminant plane
stim.setColor([1,0,0], colorSpace='lms')  # modulate only on the L cone
stim.setColor([1,1,1], colorSpace='rgb')  # all guns to max
stim.setColor([1,0,0])  # this is ambiguous - you need to specify a color space
Colors by name

Any of the web/X11 color names can be used to specify a color. These are then converted into RGB space by PsychoPy.

These are not case sensitive, but should not include any spaces.

Colors by hex value

This is really just another way of specifying the r,g,b values of a color, where each gun’s value is given by two hexadecimal characters. For some examples see this chart. To use these in PsychoPy they should be formatted as a string, beginning with # and with no spaces. (NB on a British Mac keyboard the # key is hidden - you need to press Alt-3)

RGB color space

This is the simplest color space, in which colors are represented by a triplet of values that specify the red, green and blue intensities. These three values each range between -1 and 1.

Examples:

  • [1,1,1] is white
  • [0,0,0] is grey
  • [-1,-1,-1] is black
  • [1.0,-1,-1] is red
  • [1.0,0.6,0.6] is pink

The reason that these colors are expressed ranging between 1 and -1 (rather than 0:1 or 0:255) is that many experiments, particularly in visual science where PsychoPy has its roots, express colors as deviations from a grey screen. Under that scheme a value of -1 is the maximum decrement from grey and +1 is the maximum increment above grey.

Note that PsychoPy will use your monitor calibration to linearize this for each gun. E.g., 0 will be halfway between the minimum luminance and maximum luminance for each gun, if your monitor gammaGrid is set correctly.

HSV color space

Another way to specify colors is in terms of their Hue, Saturation and ‘Value’ (HSV). For a description of the color space see the Wikipedia HSV entry. The Hue in this case is specified in degrees, the saturation ranging 0:1 and the ‘value’ also ranging 0:1.

Examples:

  • [0,1,1] is red
  • [0,0.5,1] is pink
  • [90,1,1] is cyan
  • [anything, 0, 1] is white
  • [anything, 0, 0.5] is grey
  • [anything, anything,0] is black

Note that colors specified in this space (like in RGB space) are not going to be the same on another monitor; they are device-specific. They simply specify the intensity of the 3 primaries of your monitor, but these differ between monitors. As with the RGB space, gamma correction is automatically applied if available.

DKL color space

To use DKL color space the monitor should be calibrated with an appropriate spectrophotometer, such as a PR650.

In the Derrington, Krauskopf and Lennie [1] color space (based on the Macleod and Boynton [2] chromaticity diagram) colors are represented in a 3-dimensional space using spherical coordinates that specify the elevation from the isoluminant plane, the azimuth (the hue) and the contrast (as a fraction of the maximal modulations along the cardinal axes of the space).

_images/dklSpace.png

In PsychoPy these values are specified in units of degrees for elevation and azimuth and as a float (ranging -1:1) for the contrast.

Note that not all colors that can be specified in DKL color space can be reproduced on a monitor. Here is a movie plotting in DKL space (showing cartesian coordinates, not spherical coordinates) the gamut of colors available on an example CRT monitor.

Examples:

  • [90,0,1] is white (maximum elevation aligns the color with the luminance axis)
  • [0,0,1] is an isoluminant stimulus, with azimuth 0 (S-axis)
  • [0,45,1] is an isoluminant stimulus, with an oblique azimuth

[1] Derrington, A.M., Krauskopf, J., & Lennie, P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. Journal of Physiology, 357, 241-265.
[2] MacLeod, D. I. A. & Boynton, R. M. (1979). Chromaticity diagram showing cone excitation by stimuli of equal luminance. Journal of the Optical Society of America, 69(8), 1183-1186.
LMS color space

To use LMS color space the monitor should be calibrated with an appropriate spectrophotometer, such as a PR650.

In this color space you can specify the relative strength of stimulation desired for each cone independently, each with a value from -1:1. This is particularly useful for experiments that need to generate cone isolating stimuli (for which modulation is only affecting a single cone type).

Preferences

General settings
winType:
PsychoPy can use one of two ‘backends’ for creating windows and drawing: pygame and pyglet. Here you can set the default backend to be used.
units:
Default units for windows and visual stimuli (‘deg’, ‘norm’, ‘cm’, ‘pix’). See Units for the window and stimuli. Can be overridden by individual experiments.
fullscr:
Should windows be created full screen by default? Can be overridden by individual experiments.
allowGUI:
When the window is created, should the frame of the window and the mouse pointer be visible? If set to False then both will be hidden.
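
These options are normally edited via the Preferences dialog, but as a rough sketch (option names as listed above) they can also be inspected or overridden for the current session from a script via the prefs object:

from psychopy import prefs

prefs.general['winType'] = 'pyglet'  # use the pyglet backend for this session
prefs.general['units'] = 'deg'       # default units for new windows
print prefs.general['fullscr']       # inspect the current default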
Application settings

These settings are common to all components of the application (Coder and Builder etc)

largeIcons:
Do you want large icons? (On some versions of wx on OS X this has no effect.)
defaultView:
Determines which view(s) open when the PsychoPy app starts up. Default is ‘last’, which fetches the same views as were open when PsychoPy last closed.
runScripts:
Don’t ask. ;-) Just leave this option as ‘process’ for now!
allowModuleImports (only used by win32):
Allow modules to be imported at startup for analysis by source assistant. This will cause startup to be slightly slower but will speedup the first analysis of a script.
Coder settings
outputFont:
a list of font names to be used in the output panel. The first found on the system will be used
fontSize (in pts):
an integer between 6 and 24 that specifies the size of fonts

codeFontSize = integer(6,24, default=12)

outputFontSize = integer(6,24, default=12)

showSourceAsst:
Do you want to show the source assistant panel (to the right of the Coder view)? On Windows this provides help about the current function if it can be found. On OS X the source assistant is of limited use and is disabled by default.
analysisLevel:
If using the source assistant, how much depth should PsychoPy try to analyse the current script? Lower values may reduce the amount of analysis performed and make the Coder view more responsive (particularly for files that import many modules and sub-modules).
analyseAuto:
If using the source assistant, should PsychoPy try to analyse the current script on every save/load of the file? The code can be analysed manually from the tools menu
showOutput:
Show the output panel in the Coder view. If shown all python output from the session will be output to this panel. Otherwise it will be directed to the original location (typically the terminal window that called PsychoPy application to open).
reloadPrevFiles:
Should PsychoPy fetch the files that you previously had open when it launches?
Builder settings
reloadPrevExp (default=False):
should the Builder reload the previously opened experiment when it starts up?
componentsFolders:
a list of folder pathnames that can hold additional custom components for the Builder view
hiddenComponents:
a list of components to hide (e.g., because you never use them)
Connection settings
proxy:
The proxy server used to connect to the internet if needed. Must be of the form http://111.222.333.444:5555
autoProxy:
Should PsychoPy try to deduce the proxy automatically? (If this is True and the automatic detection succeeds then the proxy field above should end up containing a valid proxy address.)
allowUsageStats:
Allow PsychoPy to ping a website when the application starts up. Please leave this set to True. The info sent is simply a string that gives the date, PsychoPy version and platform info. There is no cost to you: no data is sent that could identify you and PsychoPy will not be delayed in starting as a result. The aim is simple: if we can show that lots of people are using PsychoPy there is a greater chance of it being improved faster in the future.
checkForUpdates:
PsychoPy can (hopefully) automatically fetch and install updates. This will only work for minor updates and is still in a very experimental state (as of v1.51.00).
Key bindings

There are many shortcut keys that you can use in PsychoPy. For instance did you realise that you can indent or outdent a block of code with Ctrl-[ and Ctrl-] ?

Data outputs

There are a number of different forms of output that PsychoPy can generate, depending on the study and your preferred analysis software. Multiple file types can be output from a single experiment (e.g. Excel data file for a quick browse, Log file to check for error messages and PsychoPy data file (.psydat) for detailed analysis)

Log file

Log files are actually rather difficult to use for data analysis but provide a chronological record of everything that happened during your study. The level of content in them depends on you. See Logging data for further information.

PsychoPy data file (.psydat)

This is actually a TrialHandler or StairHandler object that has been saved to disk with the python cPickle module.

These files are designed for users with previous experience of python and, probably, matplotlib. The contents of the file can be explored with dir(), as with any other python object.

These files are ideal for batch analysis with a python script and plotting via matplotlib. They contain more information than the Excel or csv data files, and can even be used to (re)create those files.

Of particular interest might be the attributes of the Handler:
extraInfo: the extraInfo dictionary provided to the Handler during its creation
trialList: the list of dictionaries provided to the Handler during its creation
data: a dictionary of 2D numpy arrays. Each entry in the dictionary represents a type of data (e.g. if you added ‘rt’ data during your experiment using psychopy.data.TrialHandler.addData then ‘rt’ will be a key). For each of those entries the 2D array represents the condition number and repeat number (remember that these start at 0 in python, unlike Matlab(TM) which starts at 1)

For example, to open a psydat file and examine some of its contents:

from psychopy.misc import fromFile
import numpy

datFile = fromFile('fileName.psydat')
#get info (added when the handler was created)
print datFile.extraInfo
#get data
print datFile.data
#get list of conditions
conditions = datFile.trialList
for condN, condition in enumerate(conditions):
    print condition, datFile.data['response'][condN], numpy.mean(datFile.data['response'][condN])

Ideally, we should provide a demo script here for fetching and plotting some data (feel free to contribute).

Long-wide data file

This form of data file is the default data output from Builder experiments as of v1.74.00. Rather than summarising data in a spreadsheet where one row represents all the data from a single condition (as in the summarised data format), in long-wide data files the data is not collapsed by condition, but written chronologically with one row representing one trial (hence it is typically longer than summarised data files). One column in this format is used for every single piece of information available in the experiment, even where that information might be considered redundant (hence the format is also ‘wide’).

Although these data files might not be quite as easy to read quickly by the experimenter, they are ideal for import and analysis under packages such as R, SPSS or Matlab.

Excel data file

Excel 2007 files (.xlsx) are a useful and flexible way to output data as a spreadsheet. The file format is open and supported by nearly all spreadsheet applications (including older versions of Excel and also OpenOffice). N.B. because .xlsx files are widely supported, the older Excel file format (.xls) is not likely to be supported by PsychoPy unless a user contributes the code to the project.

Data from PsychoPy are output as a table, with a header row. Each row represents one condition (trial type) as given to the TrialHandler. Each column represents a different type of data as given in the header. For some data there are multiple columns for a single entry in the header; this indicates multiple trials. For example, with a standard data file in which response time has been collected as ‘rt’ there will be a heading rt_raw with several columns, one for each trial that occurred for the various trial types, and also an rt_mean heading with just a single column giving the mean reaction time for each condition.

If you’re creating experiments by writing scripts then you can specify the sheet name as well as file name for Excel file outputs. This way you can store multiple sessions for a single subject (use the subject as the filename and a date-stamp as the sheetname) or a single file for multiple subjects (give the experiment name as the filename and the participant as the sheetname).
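
For example, a minimal sketch from a script (the condition list, file name and sheet name here are purely illustrative):

from psychopy import data

trials = data.TrialHandler(trialList=[{'ori': 0}, {'ori': 90}], nReps=10)
# ... run the trials, storing responses with trials.addData('rt', rt) ...
trials.saveAsExcel(fileName='subj01.xlsx', sheetName='2014-12-01')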

Builder experiments use the participant name as the file name and then create a sheet in the Excel file for each loop of the experiment. e.g. you could have a set of practice trials in a loop, followed by a set of main trials, and these would each receive their own sheet in the data file.

Delimited text files (.csv, .tsv, .txt)

For maximum compatibility, especially for legacy analysis software, you can choose to output your data as a delimited text file. Typically this would be comma-separated values (.csv file) or tab-delimited (.txt file). The format of those files is exactly the same as the Excel file, but is limited by the file format to a single sheet.

Gamma correcting a monitor

Monitors typically don’t have linear outputs: when you request a luminance level of 127, it is not exactly half the luminance of value 254. For experiments that require luminance values to be linear, a correction needs to be put in place for this nonlinearity, which typically involves fitting a power law or gamma (\gamma) function to the monitor output values. This process is often referred to as gamma correction.

PsychoPy can help you perform gamma correction on your monitor, especially if you have one of the supported photometers/spectroradiometers.

There are various different equations with which to perform gamma correction. The simple equation (1) is assumed by most hardware manufacturers and gives a reasonable first approximation to a linear correction. The full gamma correction equation (3) is more general, and likely more accurate especially where the lowest luminance value of the monitor is bright, but also requires more information. It can only be used in labs that do have access to a photometer or similar device.

Simple gamma correction

The simple form of correction (as used by most hardware and software) is this:

(1)  L(V) = a + kV^\gamma

where L is the final luminance value, V is the requested intensity (ranging 0 to 1), a, k and \gamma are constants for the monitor.

This equation assumes that the luminance where the monitor is set to ‘black’ (V=0) comes entirely from the surround and is therefore not subject to the same nonlinearity as the monitor. If the monitor itself contributes significantly to a then the function may not fit very well and the correction will be poor.

The advantage of this function is that the calibrating system (PsychoPy in this case) does not need to know anything more about the monitor than the gamma value itself (for each gun). For the full gamma equation (3), the system needs to know about several additional variables. The look-up table (LUT) values required to give a (roughly) linear luminance output can be generated by:

(2)  LUT(V) = V^{1/\gamma}

where V is the entry in the LUT, between 0 (black) and 1 (white).
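
As a minimal numeric sketch (the gamma value is illustrative), the simple LUT for one gun can be generated with numpy:

import numpy as np

gamma = 2.2                      # illustrative value for one gun
V = np.linspace(0.0, 1.0, 256)   # LUT entries from black to white
LUT = V ** (1.0 / gamma)         # equation (2)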

Full gamma correction

For very accurate gamma correction PsychoPy uses a more general form of the equation above, which can separate the contribution of the monitor and the background to the lowest luminance level:

(3)  L(V) = a + (b+kV)^\gamma

This equation makes no assumption about the origin of the base luminance value, but requires that the system knows the values of b and k as well as \gamma.

The inverse values, required to build the LUT are found by:

(4)  LUT(V) = \frac{( (1-V)b^\gamma + V(b+k)^\gamma )^{1/\gamma}-b}{k}

This is derived below, for the interested reader. ;-)

And the associated luminance values for each point in the LUT are given by:

L(V) = a + (1-V)b^\gamma + V(b+k)^\gamma
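
A rough numpy sketch of building such a LUT, with purely illustrative values for a, b, k and \gamma (this is not PsychoPy's internal calibration code):

import numpy as np

a, b, k, gamma = 0.5, 1.2, 9.0, 2.2   # illustrative monitor parameters
V = np.linspace(0.0, 1.0, 256)        # fractional positions in the LUT
LUT = (((1 - V) * b**gamma + V * (b + k)**gamma)**(1.0 / gamma) - b) / k  # equation (4)
L = a + (1 - V) * b**gamma + V * (b + k)**gamma                           # luminance produced at each entry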

Deriving the inverse full equation

The difficulty with the full gamma equation (3) is that the presence of the b value complicates the issue of calculating the inverse values for the LUT. The simple inverse of (3) as a function of output luminance values is:

(5)  LUT(L) = \frac{((L-a)^{1/\gamma} - b )}{k}

To use this equation we need to first calculate the linear set of luminance values, L, that we are able to produce with the current monitor and lighting conditions and then deduce the LUT value needed to generate each luminance value.

We need to insert into the LUT the values between 0 and 1 (to use the maximum range) that map onto the linear range from the minimum, m, to the maximum M possible luminance. From the parameters in (3) it is clear that:

(6)  m = a + b^\gamma

M = a+(b+k)^\gamma

Thus, the luminance value L at any given point V in the LUT is given by

(7)  L(V) &= m + (M-m)V \\
     &= a + b^\gamma + (a + (b+k)^\gamma - a - b^\gamma)V \\
     &= a + b^\gamma + ((b+k)^\gamma - b^\gamma)V \\
     &= a + (1-V)b^\gamma + V(b+k)^\gamma

where V is the position in the LUT as a fraction.

Now, to generate the LUT as needed we simply take the inverse of (3):

(8)  LUT(L) = \frac{(L-a)^{1/\gamma}-b}{k}

and substitute our L(V) values from (7):

(9)  LUT(V) &= \frac{( a + (1-V)b^\gamma + V(b+k)^\gamma - a)^{1/\gamma}-b}{k}\\
    &= \frac{( (1-V)b^\gamma + V(b+k)^\gamma )^{1/\gamma}-b}{k}


OpenGL and Rendering

All rendering performed by PsychoPy uses hardware-accelerated OpenGL rendering where possible. This means that, as much as possible, the necessary processing to calculate pixel values is performed by the graphics card GPU rather than by the CPU. For example, when an image is rotated the calculations to determine what pixel values should result, and any interpolation that is needed, are determined by the graphics card automatically.

In the double-buffered system, stimuli are initially drawn into a piece of memory on the graphics card called the ‘back buffer’, while the screen presents the ‘front buffer’. The back buffer initially starts blank (all pixels are set to the window’s defined color) and as stimuli are ‘rendered’ they are gradually added to this back buffer. The way in which stimuli are combined according to transparency rules is determined by the blend mode of the window. At some point in time, when we have rendered to this buffer all the objects that we wish to be presented, the buffers are ‘flipped’ such that the stimuli we have been drawing are presented simultaneously. The monitor updates at a very precise fixed rate and the flipping of the window will be synchronised to this monitor update if possible (see Sync to VBL and wait for VBL).

Each update of the window is referred to as a ‘frame’ and this ultimately determines the temporal resolution with which stimuli can be presented (you cannot present your stimulus for any duration other than a multiple of the frame duration). In addition to synchronising flips to the frame refresh rate, PsychoPy can optionally go a further step of not allowing the code to continue until a screen flip has occurred on the screen, which is useful in ascertaining exactly when the frame refresh occurred (and, thus, when your stimulus actually appeared to the subject). These timestamps are very precise on most computers. For further information about synchronising and waiting for the refresh see Sync to VBL and wait for VBL.

If the code/processing required to render all your stimuli to the screen takes longer to complete than one screen refresh then you will ‘drop/skip a frame’. In this case the previous frame will be left on screen for a further frame period and the flip will only take effect on the following screen update. As a result, time-consuming operations such as disk accesses or execution of many lines of code should be avoided while stimuli are being dynamically updated (if you care about the precise timing of your stimuli). For further information see the sections on Detecting dropped frames and Reducing dropped frames.

Fast and slow functions

Modern graphics processors are extremely powerful; they can carry out a great deal of processing from a very small number of commands. Consider, for instance, the PsychoPy Coder demo elementArrayStim in which several hundred Gabor patches are updated frame by frame. The graphics card has to blend a sinusoidal grating with a grey background, using a Gaussian profile, several hundred times, each at a different orientation and location, and it does this in less than one screen refresh on a good graphics card.

There are three things that are relatively slow and should be avoided at critical points in time (e.g. when rendering a dynamic or brief stimulus). These are a) disk accesses, b) passing large amounts of data to the graphics card, and c) making large numbers of python calls.

Functions that are very fast:

  1. Calls that move, resize, rotate your stimuli are likely to carry almost no overhead

  2. Calls that alter the color, contrast or opacity of your stimulus will also have no overhead IF your graphics card supports OpenGL Shaders

  3. Updating of stimulus parameters for psychopy.visual.ElementArrayStim is also surprisingly fast BUT you should try to update your stimuli using numpy arrays for the maths rather than for... loops

Notably slow functions in PsychoPy:

  1. Calls to set the image or set the mask of a stimulus. This involves having to transfer large amounts of data between the computer’s main processor and the graphics card, which is a relatively time-consuming process.

  2. Any of your own code that uses a Python for... loop is likely to be slow if you have a large number of cycles through the loop. Try to ‘vectorise’ your code using a numpy array instead (see the sketch below).
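
The following rough sketch (element count, field size and element sizes are illustrative) shows the recommended vectorised style - one numpy call updates every element at once, rather than a python loop touching each element in turn:

import numpy as np
from psychopy import visual

win = visual.Window([800, 600], units='pix')
nElements = 500
dots = visual.ElementArrayStim(win, nElements=nElements, fieldSize=(500, 500), sizes=20)

for frameN in range(100):
    dots.setOris(np.random.uniform(0, 360, nElements))  # fast: one vectorised update
    dots.draw()
    win.flip()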

Tips to render stimuli faster
  1. Keep images as small as possible. This is meant in terms of number of pixels, not in terms of Mb on your disk. Reducing the size of the image on your disk might have been achieved by image compression, such as using jpeg images, but this introduces artefacts and does nothing to reduce the problem of sending large amounts of data from the CPU to the graphics card. Keep in mind the size that the image will appear on your monitor and how many pixels it will occupy there. If you took your photo using a 10 megapixel camera then the image is represented by 30 million numbers (a red, green and blue value for each pixel), but your computer monitor will have, at most, around 2 megapixels (1920x1080).

  2. Try to use square powers of two for your image sizes. This is efficient because computer memory is organised according to powers of two (did you notice how often numbers like 128, 512, 1024 seem to come up when you buy your computer?). Also, several mathematical routines (anything involving Fourier maths, which is used a lot in graphics processing) are faster with power-of-two sequences. For psychopy.visual.GratingStim a texture/mask of this size is required and if you don’t provide one then your texture will be ‘upsampled’ to the next larger square power of two, so you can save this interpolation step by providing it in the right shape initially.

  3. Get a faster graphics card. Upgrading to a more recent card will cost around £30. If you’re currently using an integrated Intel graphics chip then almost any graphics card will be an advantage. Try to get an nVidia or an ATI Radeon card.

OpenGL Shaders

You may have heard mention of ‘shaders’ on the users mailing list and wondered what that meant (or maybe you didn’t wonder at all and just went for a donut!). OpenGL shader programs allow modern graphics cards to make changes to things during the rendering process (i.e. while the image is being drawn). To use this you need a graphics card that supports OpenGL 2.1, and PsychoPy will only make use of shaders if a specific OpenGL extension that allows floating point textures is also supported. Nowadays nearly all graphics cards support these features - even recent integrated Intel chips!

One example of how such shaders are used is the way that PsychoPy colors greyscale images. If you provide a greyscale image as a 128x128 pixel texture and set its color to be red then, without shaders, PsychoPy needs to create a texture containing 3x128x128 values, where each of the 3 planes is scaled according to the RGB values you require. If you change the color of the stimulus a new texture has to be generated with the new weightings for the 3 planes. However, with a shader program, that final step of scaling the texture value according to the appropriate RGB value can be done by the graphics card. That means we can upload just the 128x128 texture (taking 1/3 as much time to upload to the graphics card) and then each time we change the color of the stimulus we just send a new RGB triplet (only 3 numbers) without having to recalculate the texture. As a result, on graphics cards that support shaders, changing colors, contrasts and opacities etc. has almost zero overhead.

Blend Mode

A ‘blend function’ determines how the values of new pixels being drawn should be combined with existing pixels in the ‘frame buffer’.

blendMode = ‘avg’

This mode is exactly akin to the real-world scenario of objects with varying degrees of transparency being placed in front of each other; increasingly transparent objects allow increasing amounts of the underlying stimuli to show through. Opaque stimuli will simply occlude previously drawn objects. With each increasing semi-transparent object to be added, the visibility of the first object becomes increasingly weak. The order in which stimuli are rendered is very important since it determines the ordering of the layers. Mathematically, each pixel colour is constructed from opacity*stimRGB + (1-opacity)*backgroundRGB. This was the only mode available before PsychoPy version 1.80 and remains the default for the sake of backwards compatibility.

blendMode = ‘add’

If the window blendMode is set to ‘add’ then the value of the new stimulus does not in any way replace that of the existing stimuli that have been drawn; it is added to it. In this case the value of opacity still affects the weighting of the new stimulus being drawn but the first stimulus to be drawn is never ‘occluded’ as such. The sum is performed using the signed values of the color representation in PsychoPy, with the mean grey being represented by zero. So a dark patch added to a dark background will get even darker. For grating stimuli this means that contrast is summed correctly.

This blend mode is ideal if you want to test, for example, the way that subjects perceive the sum of two potentially overlapping stimuli. It is also needed for rendering stereo/dichoptic stimuli to be viewed through colored anaglyph glasses.

If stimuli are combined in such a way that an impossible luminance value is requested of any of the monitor guns then that pixel will be out of bounds. In this case the pixel can either be clipped to provide the nearest possible colour, or can be artificially colored with noise, highlighting the problem if the user would prefer to know that this has happened.
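
A minimal sketch of selecting the blend mode when the window is created (additive blending normally also requires an FBO-backed window, hence the useFBO setting below; check the Window documentation for your version):

from psychopy import visual

win = visual.Window([800, 600], blendMode='add', useFBO=True)
# the mode can also be changed later, e.g. win.blendMode = 'avg'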

Sync to VBL and wait for VBL

PsychoPy will always, if the graphics card allows it, synchronise the flipping of the window with the vertical blank interval (VBL aka VBI) of the screen. This prevents visual artefacts such as ‘tearing’ of moving stimuli. This does not, itself, indicate that the script also waits for the physical frame flip to occur before continuing. If the waitBlanking window argument is set to False then, although the window refreshes themselves will only occur in sync with the screen VBL, the win.flip() call will not actually wait for this to occur, such that preparations can continue immediately for the next frame. For rendering purposes this is actually optimal and will reduce the likelihood of frames being dropped during rendering.

By default the PsychoPy Window will also wait for the VBL (waitBlanking=True) . Although this is slightly less efficient for rendering purposes it is necessary if we need to know exactly when a frame flip occurred (e.g. to timestamp when the stimulus was physically presented). On most systems this will provide a very accurate measure of when the stimulus was presented (with a variance typically well below 1ms but this should be tested on your system).
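
For example, a minimal sketch of timestamping stimulus onset using the default blocking flip (window size and stimulus are illustrative):

from psychopy import visual, core

win = visual.Window([800, 600], waitBlanking=True)  # True is the default
stim = visual.TextStim(win, text='Hello')
clock = core.Clock()

stim.draw()
win.flip()                    # does not return until the physical refresh
onsetTime = clock.getTime()   # read immediately after the flip: a good estimate of onset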

Timing Issues and synchronisation

One of the key requirements of experimental control software is that it has good temporal precision. PsychoPy aims to be as precise as possible in this domain and can achieve excellent results depending on your experiment and hardware. It also provides you with a precise log file of your experiment to allow you to check the precision with which things occurred. Some general considerations are discussed here and there are links with Specific considerations for specific designs.

Something that people seem to forget (not helped by the software manufacturers that keep talking about their sub-millisecond precision) is that the monitor, keyboard and human participant DO NOT have anything like this sort of precision. Your monitor updates every 10-20ms depending on frame rate. If you use a CRT screen then the top is drawn before the bottom of the screen by several ms. If you use an LCD screen the whole screen can take around 20ms to switch from one image to the next. Your keyboard has a latency of 4-30ms, depending on brand and system.

So, yes, PsychoPy’s temporal precision is as good as most other equivalent applications, for instance the duration for which stimuli are presented can be synchronised precisely to the frame, but the overall accuracy is likely to be severely limited by your experimental hardware. To get very precise timing of responses etc., you need to use specialised hardware like button boxes and you need to think carefully about the physics of your monitor.

Warning

The information about timing in PsychoPy assumes that your graphics card is capable of synchronising with the monitor frame rate. For integrated Intel graphics chips (e.g. GMA 945) under Windows, this is not true and the use of those chips is not recommended for serious experimental use as a result. Desktop systems can have a moderate graphics card added for around £30 which will be vastly superior in performance.

Specific considerations for specific designs
Non-slip timing for imaging

For most behavioural/psychophysics studies timing is most simply controlled by setting some timer (e.g. a Clock()) to zero and waiting until it has reached a certain value before ending the trial. We might call this a ‘relative’ timing method, because everything is timed from the start of the trial/epoch. In reality this will cause an overshoot of some fraction of one screen refresh period (10ms, say). For imaging (EEG/MEG/fMRI) studies adding 10ms to each trial repeatedly for 10 minutes will become a problem, however. After 100 stimulus presentations your stimulus and scanner will be de-synchronised by 1 second.

There are two ways to get around this:

  1. Time by frames If you are confident that you aren’t dropping frames then you could base your timing on frames instead to avoid the problem.
  2. Non-slip (global) clock timing The other way, which for imaging is probably the most sensible, is to arrange timing based on a global clock rather than on a relative timing method. At the start of each trial you add the (known) duration that the trial will last to a global timer and then wait until that timer reaches the necessary value. To facilitate this, the PsychoPy Clock() was given a new add() method as of version 1.74.00 and a CountdownTimer() was also added.

The non-slip method can only be used in cases where the trial is of a known duration at its start. It cannot, for example, be used if the trial ends when the subject makes a response, as would occur in most behavioural studies.
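
A rough sketch of the non-slip approach, similar in spirit to the Builder's routineTimer (the trial duration and stimulus here are illustrative):

from psychopy import visual, core

win = visual.Window([800, 600])
stim = visual.TextStim(win, text='Hello')
trialDur = 2.0                         # each trial has a known duration
routineTimer = core.CountdownTimer()   # a single global countdown clock

for trialN in range(5):
    routineTimer.add(trialDur)         # schedule the end of this trial
    while routineTimer.getTime() > 0:  # overshoots never accumulate across trials
        stim.draw()
        win.flip()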

Non-slip timing from the Builder

(new feature as of version 1.74.00)

When creating experiments in the Builder, PsychoPy will attempt to identify whether a particular Routine has a known endpoint in seconds. If so then it will use non-slip timing for this Routine based on a global countdown timer called routineTimer. Routines that are able to use this non-slip method are shown in green in the Flow, whereas Routines using relative timing are shown in red. So, if you are using PsychoPy for imaging studies then make sure that all the Routines within your loop of epochs are showing as green. (Typically your study will also have a Routine at the start waiting for the first scanner pulse and this will use relative timing, which is appropriate).

Detecting dropped frames

Occasionally you will drop frames if you:

  • try to do too much drawing
  • do it in an inefficient manner (write poor code)
  • have a poor computer/graphics card

Things to avoid:

  • recreating textures for stimuli
  • building new stimuli from scratch (create them once at the top of your script and then change them using stim.setOri(ori), stim.setPos([x,y]), etc.)
Turn on frame time recording

The key sometimes is knowing if you are dropping frames. PsychoPy can help with that by keeping track of frame durations. By default, frame time tracking is turned off because many people don’t need it, but it can be turned on at any time after Window creation by calling setRecordFrameIntervals(), e.g.:

from psychopy import visual

win = visual.Window([800,600])
win.setRecordFrameIntervals(True)

Since there are often dropped frames just after the system is initialised, it makes sense to start off with a fixation period or a ready message, and not to start recording frame times until that has ended. Obviously if you aren’t refreshing the window at some point (e.g. waiting for a key press with an unchanging screen) then you should turn off the recording of frame times or it will give spurious results.

Warn me if I drop a frame

The simplest way to check if a frame has been dropped is to get PsychoPy to report a warning if it thinks a frame was dropped:

from psychopy import visual, logging
win = visual.Window([800,600])
win.setRecordFrameIntervals(True)
win._refreshThreshold=1/85.0+0.004 #i've got 85Hz monitor and want to allow 4ms tolerance
#set the log module to report warnings to the std output window (default is errors only)
logging.console.setLevel(logging.WARNING)
Show me all the frame times that I recorded

While recording frame times, these are simply appended, frame by frame, to win.frameIntervals (a list). You can plot these at the end of your script using pylab:

import pylab
pylab.plot(win.frameIntervals)
pylab.show()

Or you could save them to disk. A convenience function is provided for this:

win.saveFrameIntervals(fileName=None, clear=True)

The above will save the currently stored frame intervals (using the default filename, ‘lastFrameIntervals.log’) and then clears the data. The saved file is a simple text file.

At any time you can also retrieve the time of the last frame flip using win.lastFrameT (the time is synchronised with logging.defaultClock so it will match any logging commands that your script uses).

‘Blocking’ on the VBI

As of version 1.62 PsychoPy ‘blocks’ on the vertical blank interval meaning that, once Window.flip() has been called, no code will be executed until that flip actually takes place. The timestamp for the above frame interval measurements is taken immediately after the flip occurs. Run the timeByFrames demo in Coder to see the precision of these measurements on your system. They should be within 1ms of your mean frame interval.

Note that Intel integrated graphics chips (e.g. GMA 945) under win32 do not sync to the screen at all and so blocking on those machines is not possible.

Reducing dropped frames

There are many things that can affect the speed at which drawing is achieved on your computer. These include, but are probably not limited to; your graphics card, CPU, operating system, running programs, stimuli, and your code itself. Of these, the CPU and the OS appear to make rather little difference. To determine whether you are actually dropping frames see Detecting dropped frames.

Things to change on your system:
  1. make sure you have a good graphics card. Avoid integrated graphics chips, especially Intel integrated chips and especially on laptops (because on these you don’t get to change your mind so easily later). In particular, try to make sure that your card supports OpenGL 2.0

  2. shut down as many programs and background processes as possible. Although modern processors are fast and often have multiple cores, substantial disk/memory access can cause frame drops. Common culprits include:
    • anti-virus auto-updating (if you’re allowed)
    • email checking software
    • file indexing software
    • backup solutions (e.g. TimeMachine)
    • Dropbox
    • Synchronisation software
Writing optimal scripts
  1. run in full-screen mode (rather than simply filling the screen with your window). This way the OS doesn’t have to spend time working out what application is currently getting keyboard/mouse events.

  2. don’t generate your stimuli when you need them. Generate them in advance and then just modify them later with methods like setContrast(), setOri() etc. (see the sketch after this list)

  3. calls to the following functions are comparatively slow; they require more CPU time than most other functions and then have to send a large amount of data to the graphics card. Try to use these methods in inter-trial intervals. This is especially true when the image also has to be loaded from disk to create the texture.
    1. GratingStim.setTexture()
    2. RadialStim.setTexture()
    3. TextStim.setText()
  4. if you don’t have OpenGL 2.0 then calls to setContrast, setRGB and setOpacity will also be slow, because they also make a call to setTexture(). If you have shader support then this call is not necessary and a large speed increase will result.

  5. avoid loops in your python code (use numpy arrays to do maths with lots of elements)

  6. if you need to create a large number (e.g. greater than 10) of similar stimuli, then try the ElementArrayStim
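
As a sketch of point 2 above (names and parameter values are illustrative), create stimuli once and only change cheap properties inside the trial loop:

from psychopy import visual

win = visual.Window([800, 600], units='pix')
grating = visual.GratingStim(win, tex='sin', mask='gauss', size=256)  # created once

for trialN in range(20):
    grating.setOri(trialN * 18)   # cheap: no texture re-upload
    grating.setContrast(0.5)      # cheap if shaders are available
    grating.draw()
    win.flip()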

Possible good ideas

It isn’t clear that these actually make a difference, but they might.

  1. disconnect the internet cable (to prevent programs performing auto-updates?)
  2. on Macs you can actually shut down the Finder. It might help. See Alex Holcombe’s page here
  3. use a single screen rather than two (probably there is some graphics card overhead in managing double the number of pixels?)
Comparing Operating Systems under PsychoPy

This is an attempt to quantify the ability of PsychoPy to draw without dropping frames on a variety of hardware/software. The following tests were conducted using the script at the bottom of the page. Note, of course, that the hardware differs entirely between the Mac and the Linux/Windows systems below, but both are standard off-the-shelf machines.

All of the below tests were conducted with ‘normal’ systems rather than anything that had been specifically optimised:
  • the machines were connected to the network
  • anti-virus software was left running (except on Ubuntu, which had no anti-virus installed)
  • they even all had dropbox clients running
  • Linux was the standard (not ‘realtime’ kernel)

No applications were actively being used by the operator while tests were run.

In order to test drawing under a variety of processing loads the test stimulus was one of:
  • a single drifting Gabor
  • 500 random dots continuously updating
  • 750 random dots continuously updating
  • 1000 random dots continuously updating
Common settings:
  • Monitor was a CRT 1024x768 100Hz
  • all tests were run in full screen mode with mouse hidden
System Differences:
  • the iMac was lower spec than the Windows/Linux box and running across two monitors (necessary in order to connect to the CRT)
  • the Windows/Linux box ran off a single monitor

Each run below gives the number of dropped frames out of a run of 10,000 (2.7 mins at 100Hz).

              Windows XP (SP3)   Windows 7 Enterprise   Mac OS X 10.6 (Snow Leopard)   Ubuntu 11.10
Gabor         0                  5                      0                              0
500-dot RDK   0                  5                      54                             3
750-dot RDK   21                 7                      aborted                        1174
1000-dot RDK  776                aborted                aborted                        aborted
GPU           Radeon 5400        Radeon 5400            Radeon 2400                    Radeon 5400
GPU driver    Catalyst 11.11     Catalyst 11.11         -                              Catalyst 11.11
CPU           Core Duo 3GHz      Core Duo 3GHz          Core Duo 2.4GHz                Core Duo 3GHz
RAM           4GB                4GB                    2GB                            4GB
I’ll gradually try to update these tests to include:
  • longer runs (one per night!)
  • a faster Mac
  • a real-time Linux kernel
Other questions about timing
Can PsychoPy deliver millisecond precision?

The simple answer is ‘yes’, given some additional hardware. The clocks that PsychoPy uses do have sub-millisecond precision but your keyboard has a latency of 4-25ms depending on your platform and keyboard. You could buy a response pad (e.g. a Cedrus Response Pad ) and use PsychoPy’s serial port commands to retrieve information about responses and timing with a precision of around 1ms.

Before conducting your experiment in which effects might be on the order of 1 ms, do consider that;
  • your screen has a temporal resolution of ~10 ms
  • your visual system has a similar upper limit (or you would notice the flickering screen)
  • human response times are typically in the range 200-400 ms and very variable
  • USB keyboard latencies are variable, in the range 20-30ms

That said, PsychoPy does aim to give you as high a temporal precision as possible, and is likely not to be the limiting factor of your experiment.

Computer monitors

Monitors have fixed refresh rates, typically 60 Hz for a flat-panel display, higher for a CRT (85-100 Hz are common, up to 200 Hz is possible). For a refresh rate of 85 Hz there is a gap of 11.7 ms between frames and this limits the timing of stimulus presentation. You cannot have your stimulus appear for 100 ms, for instance; on an 85Hz monitor it can appear for either 94 ms (8 frames) or 105 ms (9 frames). There are further, less obvious, limitations however.

For CRT (cathode ray tube) screens, the lines of pixels are drawn sequentially from the top to the bottom and once the bottom line has been drawn the screen is finished and the line returns to the top (the Vertical Blank Interval, VBI). Most of your frame interval is spent drawing the lines, with 1-2ms being left for the VBI. This means that the pixels at the bottom are drawn up to 10 ms later than the pixels at the top of the screen. At what point are you going to say your stimulus ‘appeared’ to the participant? For flat panel displays (or LCD projectors) your image will be presented simultaneously all over the screen, but it takes up to 20 ms(!!) for your pixels to go all the way from black to white (manufacturers of these panels quote values of 3 ms for the fastest panels, but they certainly don’t mean 3 ms white-to-black; I assume they mean 3 ms half-life).

_images/TopOfScreen.jpg

Figure 1: photodiode trace at top of screen. The image above shows the luminance trace of a CRT recorded by a fast photo-sensitive diode at the top of the screen when a stimulus is requested (shown by the square wave). The square wave at the bottom is from a parallel port that indicates when the stimulus was flipped to the screen. Note that on a CRT the screen at any point is actually black for the majority of the time and just briefly bright. The visual system integrates over a large enough time window not to notice this. On the next frame after the stimulus ‘presentation time’ the luminance of the screen flash increased.

_images/BottOfScreen.jpg

Figure 2: photodiode trace of the same large stimulus at the bottom of the screen. The image above comes from exactly the same script as the one above but with the photodiode positioned at the bottom of the screen. In this case, after the stimulus is ‘requested’ the current frame (which is dark) finishes drawing and then, 10ms later than in the previous image, the screen goes bright at the bottom.

Warning

If you’re using a regular computer display, you have a hardware-limited temporal precision of around 10 ms, irrespective of your response box or software clocks, and you should bear that in mind when looking for effect sizes smaller than that.

Can I have my stimulus appear at a very precise rate?

Yes. Generally to do that you should time your stimulus (its onset/offset, its rate of change...) using the frame refresh rather than a clock. E.g. you should write your code to say ‘for 20 frames present this stimulus’ rather than ‘for 300ms present this stimulus’. Provided your graphics card is set to synchronise page-flips with the vertical blank, and provided that you aren’t dropping frames, the frame rate will always be absolutely constant.

Glossary

Adaptive staircase
An experimental method whereby the choice of stimulus parameters is not pre-determined but based on previous responses. For example, the difficulty of a task might be varied trial-to-trial based on the participant’s responses. These are often used to find psychophysical thresholds. Contrast this with the method of constants.
CRT
Cathode Ray Tube. A ‘traditional’ computer monitor (rather than an LCD or plasma flat screen).
csv
Comma-Separated Value files. A type of basic text file with ‘comma-separated values’. This type of file can be opened with most spreadsheet packages (e.g. MS Excel) for easy reading and manipulation.
Method of constants
An experimental method whereby the parameters controlling trials are predetermined at the beginning of the experiment, rather than determined on each trial. For example, a stimulus may be presented for 3 pre-determined time periods (100, 200, 300ms) on different trials, and then repeated a number of times. The order of presentation of the different conditions can be randomised or sequential (in a fixed order). Contrast this method with the adaptive staircase.
VBI
(Vertical Blank Interval, aka the Vertical Retrace, or Vertical Blank, VBL). The period in between video frames, which can be used for synchronisation purposes. On a CRT display the screen is black during the VBI and the display beam is returned to the top of the display.
VBI blocking
The setting whereby all functions are synced to the VBI. After a call to psychopy.visual.Window.flip() nothing else occurs until the VBI has occurred. This is optimal and allows very precise timing, because as soon as the flip has occurred a very precise time interval is known to have occurred.
VBI syncing
(aka vsync) The setting whereby the video drawing commands are synced to the VBI. When psychopy.visual.Window.flip() is called, the current back buffer (where drawing commands are being executed) will be held and drawn on the next VBI. This does not necessarily entail VBI blocking (because the system may return and continue executing commands) but does guarantee a fixed interval between frames being drawn.
xlsx
Excel OpenXML file format. A spreadsheet data format developed by Microsoft but with an open (published) format. This is the native file format for Excel (2007 or later) and can be opened by most modern spreadsheet applications including OpenOffice (3.0+), Google Docs and Apple iWork 08.

Installation

Overview

PsychoPy can be installed in three main ways:

  • As an application: The “Stand Alone” versions include everything you need to create and run experiments. When in doubt, choose this option.
  • As libraries: PsychoPy and the libraries it depends on can also be installed individually, providing greater flexibility. This option requires managing a python environment.
  • As source code: If you want to customize how PsychoPy works, consult the developer’s guide for installation and work-flow suggestions.

When you start PsychoPy for the first time, a Configuration Wizard will retrieve and summarize key system settings. Based on the summary, you may want to adjust some preferences to better reflect your environment. In addition, this is a good time to unpack the Builder demos to a location of your choice. (See the Demo menu in the Builder.)

If you get stuck or have questions, please email the mailing list.

If all goes well, at this point your installation will be complete! See the next section of the manual, Getting started.

Windows

Once installed, you’ll now find a link to the PsychoPy application in > Start > Programs > PsychoPy2. Click that and the Configuration Wizard should start.

The wizard will try to make sure you have reasonably current drivers for your graphics card. You may be directed to download the latest drivers from the vendor, rather than using the pre-installed Windows drivers. If necessary, get new drivers directly from the graphics card vendor; don’t rely on Windows updates. The Windows-supplied drivers are buggy and sometimes don’t support OpenGL at all.

The StandAlone installer adds the PsychoPy folder to your path, so you can run the included version of python from the command line. If you have your own version of python installed as well then you need to check which one is run by default, and change your path according to your personal preferences.

Mac OS X

There are different ways to install PsychoPy on a Mac that will suit different users. Almost all Macs come with a suitable video card by default.

  • Intel Mac users (with OS X v10.7 or higher; 10.5 and 10.6 might still work) can simply download the standalone application bundle (the dmg file) and drag it to their Applications folder. (Installing it elsewhere should work fine too.)

  • Users of macports can install PsychoPy and all its dependencies simply with:

    sudo port install py25-psychopy
    

    (Thanks to James Kyles.)

  • For PPC Macs (or for Intel Mac users that want their own custom python for running PsychoPy) you need to install the dependencies and PsychoPy manually. The easiest way is to use the Enthought Python Distribution (see Dependencies, below).

  • You could alternatively manually install the ‘framework build’ of python and the dependencies (see below). One advantage to this is that you can then upgrade versions with:

    sudo easy_install -N -Z -U psychopy
    

Linux

Debian systems:

PsychoPy is in the Debian packages index so you can simply do:

sudo apt-get install psychopy

Ubuntu (and other Debian-based distributions):

  1. Add the following sources in Synaptic, in the Configuration > Repository dialog box, under “Other software”:

    deb http://neuro.debian.net/debian karmic main contrib non-free
    deb-src http://neuro.debian.net/debian karmic main contrib non-free
    
  2. Then follow the ‘Package authentication’ procedure described at http://neuro.debian.net/

  3. Then install the psychopy package under Synaptic or through sudo apt-get install psychopy which will install all dependencies.

(Thanks to Yaroslav Halchenko for the Debian and NeuroDebian package.)
Gentoo

PsychoPy is in the Gentoo Science Overlay (see this link for the ebuild files). After you have enabled the overlay simply run:

# emerge psychopy
Other systems:

You need to install the dependencies below. Then install PsychoPy:

$ sudo easy_install psychopy
...
Downloading http://psychopy.googlecode.com/files/PsychoPy-1.75.01-py2.7.egg

Dependencies

Like many open-source programs, PsychoPy depends on the work of many other people in the form of libraries.

Essential packages

Python: If you need to install python, or just want to, the easiest way is to use the Enthought Python Distribution, which is free for academic use. Be sure to get a 32-bit version. The only things it misses are avbin, pyo, and flac.

If you want to install each library individually rather than use the simpler distributions of packages above then you can download the following. Make sure you get the correct version for your OS and your version of Python. easy_install will work for many of these, but some require compiling from source.

  • python (32-bit only, version 2.6 or 2.7; 2.5 might work, 3.x will not)
  • avbin (movies) On mac: 1) Download version 5 from google (not a higher version). 2) Start terminal, type sudo mkdir -p /usr/local/lib . 3) cd to the unpacked avbin directory, type sh install.sh . 4) Start or restart PsychoPy, and from PsychoPy’s coder view shell, this should work: from pyglet.media import avbin . If you run a script and get an error saying ‘NoneType’ object has no attribute ‘blit’, it probably means you did not install version 5.
  • setuptools
  • numpy (version 0.9.6 or greater)
  • scipy (version 0.4.8 or greater)
  • pyglet (version 1.1.4, not version 1.2)
  • wxPython (version 2.8.10 or 2.8.11, not 2.9)
  • Python Imaging Library (sudo easy_install PIL)
  • matplotlib (for plotting and fast polygon routines)
  • lxml (needed for loading/saving builder experiment files)
  • openpyxl (for loading params from xlsx files)
  • pyo (sound, version 0.6.2 or higher, compile with --no-messages)

These packages are only needed for Windows:

  • pywin32
  • winioport (to use the parallel port)
  • inpout32 (an alternative method to using the parallel port on Windows)
  • inpoutx64 (to use the parallel port on 64-bit Windows)

These packages are only needed for Linux:

Suggested packages

In addition to the required packages above, additional packages can be useful to PsychoPy users, e.g. for controlling hardware and performing specific tasks. These are packaged with the Standalone versions of PsychoPy but users with their own custom Python environment need to install these manually. Most of these can be installed with easy_install.

General packages:

  • psignifit for bootstrapping and other resampling tests
  • pyserial for interfacing with the serial port
  • parallel python (aka pp) for parallel processing
  • flac audio codec, for working with google-speech

Specific hardware interfaces:

For developers:

  • pytest and coverage for running unit tests
  • sphinx for building documentation

Getting Started

As an application, PsychoPy has two main views: the Builder view and the Coder view. It also has an underlying API that you can call directly.

  1. Builder. You can generate a wide range of experiments easily from the Builder using its intuitive, graphical user interface (GUI). This might be all you ever need to do. But you can always compile your experiment into a python script for fine-tuning, and this is a quick way for experienced programmers to explore some of PsychoPy’s libraries and conventions.

    The Builder view
  2. Coder. For those comfortable with programming, the Coder view provides a basic code editor with syntax highlighting, code folding, and so on. Importantly, it has its own output window and Demo menu. The demos illustrate how to do specific tasks or use specific features; they are not whole experiments. The Coder tutorials should help get you going, and the API reference will give you the details.

    The Coder view

The Builder and Coder views are the two main aspects of the PsychoPy application. If you’ve installed the StandAlone version of PsychoPy on MS Windows then there should be an obvious link to PsychoPy in your > Start > Programs. If you installed the StandAlone version on Mac OS X then the application is where you put it (!). On these two platforms you can open the Builder and Coder views from the View menu and the default view can be set from the preferences. On Linux, you can start PsychoPy from a command line, or make a launch icon (which can depend on the desktop and distro). If the PsychoPy app is started with flags --coder (or -c) or --builder (or -b), then the preferences will be overridden and that view will be created as the app opens.

For experienced python programmers, it’s possible to use PsychoPy without ever opening the Builder or Coder. Install the PsychoPy libraries and dependencies, and use your favorite IDE instead of the Coder.

Builder

When learning a new computer language, the classic first program is simply to print or display “Hello world!”. Let’s do it.

A first program

Start PsychoPy, and be sure to be in the Builder view.

  • If you have poked around a bit in the Builder already, be sure to start with a clean slate. To get a new Builder view, type Ctrl-N on Windows or Linux, or Cmd-N on Mac.

  • Click on a Text component
    _images/text.png
    and a Text Properties dialog will pop up.
    _images/textdialog.png
  • In the Text field, replace the default text with your message. When you run the program, the text you type here will be shown on the screen.

  • Click OK (near the bottom of the dialog box). (Properties dialogs have a link to online help—an icon at the bottom, near the OK button.)

  • Your text component now resides in a routine called trial. You can click on it to view or edit it. (Components, Routines, and other Builder concepts are explained in the Builder documentation.)

  • Back in the main Builder, type Ctrl-R (Windows, Linux) or Cmd-R (Mac), or use the mouse to click the Run icon.
    _images/run32.png
Assuming you typed in “Hello world!”, your screen should have looked like this (briefly):
_images/helloworld.png

If nothing happens or it looks wrong, recheck all the steps above; be sure to start from a new Builder view.

What if you wanted to display your cheerful greeting for longer than the default time?

  • Click on your Text component (the existing one, not a new one).
  • Edit the Stop duration (s) to be 3.2; times are in seconds.
  • Click OK.
  • And finally Run.

When running an experiment, you can quit by pressing the escape key (this can be configured or disabled). You can quit PsychoPy itself from the File menu, or by typing Ctrl-Q / Cmd-Q.

Getting beyond Hello

To do more, you can try things out and see what happens. You may want to consult the Builder documentation. Many people find it helpful to explore the Builder demos, in part to see what is possible, and especially to see how different things are done.

A good way to develop your own first PsychoPy experiment is to base it on the Builder demo that seems closest. Copy it, and then adapt it step by step to become more and more like the program you have in mind. Being familiar with the Builder demos can only help this process.

You could stop here, and just use the Builder for creating your experiments. It provides a lot of the key features that people need to run a wide variety of studies. But it does have its limitations. When you want more complex designs or features, you’ll want to investigate the Coder. As a segue to the Coder, let’s start from the Builder, and see how Builder programs work.

Builder-to-coder

Whenever you run a Builder experiment, PsychoPy will first translate it into python code, and then execute that code.

To get a better feel for what was happening “behind the scenes” in the Builder program above:

  • In the Builder, load or recreate your “hello world” program.

  • Instead of running the program, explicitly convert it into python: Type F5, or click the Compile icon:
    _images/compile32.png

The view will automatically switch to the Coder, and display the python code. If you then save and run this code, it will look the same as if you had run it directly from the Builder.

It is always possible to go from the Builder to python code in this way. You can then edit that code and run it as a python program. However, you cannot go from code back to a Builder representation.

To switch quickly between Builder and Coder views, you can type Ctrl-L / Cmd-L.

Coder

Being able to inspect Builder-generated code is nice, but it’s possible to write code yourself, directly. With the Coder and various libraries, you can do virtually anything that your computer is capable of doing, using a full-featured modern programming language (python).

For variety, let’s say hello to the Spanish-speaking world. PsychoPy knows Unicode (UTF-8).

If you are not in the Coder, switch to it now.

  • Start a new code document: Ctrl-N / Cmd-N.

  • Type (or copy & paste) the following:

    from psychopy import visual, core
    
    win = visual.Window()
    msg = visual.TextStim(win, text=u"\u00A1Hola mundo!")
    
    msg.draw()
    win.flip()
    core.wait(1)
    win.close()
    
  • Save the file (the same way as in Builder).

  • Run the script.

Note that the same events happen on-screen with this code version, despite the code being much simpler than the code generated by the Builder. (The Builder actually does more, such as prompt for a subject number.)
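
For instance, the Builder-generated script shows a startup dialog before the experiment begins. If you wanted something similar in your own code, a minimal sketch using psychopy.gui might look like this (the field names here are just examples):

from psychopy import gui, core

# a dictionary of fields to show in the startup dialog (example fields)
expInfo = {'participant': '', 'session': '001'}
dlg = gui.DlgFromDict(dictionary=expInfo, title='My experiment')
if not dlg.OK:
    core.quit()  # the user pressed Cancel, so abandon the experiment
# expInfo now holds whatever the user typed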

Coder Shell

The shell provides an interactive python interpreter, which means you can enter commands here to try them out. This provides yet another way to send your salutations to the world. By default, the Coder’s output window is shown at the bottom of the Coder window. Click on the Shell tab, and you should see python’s interactive prompt, >>>:

PyShell in PsychoPy - type some commands!

Type "help", "copyright", "credits" or "license" for more information.
>>>

At the prompt, type:

>>> print u"\u00A1Hola mundo!"

You can do more complex things too, such as typing each line from the Coder example above directly into the Shell window, line by line:

>>> from psychopy import visual, core

and then:

>>> win = visual.Window()
and so on, watching what happens after each line:
>>> msg = visual.TextStim(win, text=u"\u00A1Hola mundo!")
>>> msg.draw()
>>> win.flip()

and so on. This lets you try things out and see what happens line-by-line (which is how python goes through your program).

Builder

Building experiments in a GUI

You can now see a youtube PsychoPy tutorial showing you how to build a simple experiment in the Builder interface

Note

The Builder view is now (at version 1.75) fairly well-developed and should be able to construct a wide variety of studies. But you should still check carefully that the stimuli and response collection are as expected.

The Builder view

Contents:

Builder concepts

Routines and Flow

The Builder view of the PsychoPy application is designed to allow the rapid development of a wide range of studies in experimental psychology and cognitive neuroscience.

The Builder view comprises two main panels: one for viewing the experiment’s Routines (upper left) and another for viewing the Flow (lower part of the window).

An experiment can have any number of Routines, describing the timing of stimuli, instructions and responses. These are portrayed in a simple track-based view, similar to that of video-editing software, which allows stimuli to come on and go off repeatedly and to overlap with each other.

The way in which these Routines are combined and/or repeated is controlled by the Flow panel. All experiments have exactly one Flow. This takes the form of a standard flowchart allowing a sequence of routines to occur one after another, and for loops to be inserted around one or more of the Routines. The loop also controls variables that change between repetitions, such as stimulus attributes.

Example 1 - a reaction time experiment

For a simple reaction time experiment there might be 3 Routines, one that presents instructions and waits for a keypress, one that controls the trial timing, and one that thanks the participant at the end. These could then be combined in the Flow so that the instructions come first, followed by trial, followed by the thanks Routine, and a loop could be inserted so that the trial Routine is repeated 4 times for each of 6 stimulus intensities.

Example 2 - an fMRI block design
Many fMRI experiments present a sequence of stimuli in a block. For this there are multiple ways to create the experiment:
  • We could create a single Routine that contained a number of stimuli and presented them sequentially, followed by a long blank period to give the inter-epoch interval, and surround this single Routine by a loop to control the blocks.
  • Alternatively we could create a pair of Routines to allow presentation of a) a single stimulus (for 1 sec) and b) a blank screen, for the prolonged period. With these Routines we could insert a pair of loops, one to repeat the stimulus Routine with different images, followed by the blank Routine, and another to surround this whole set and control the blocks.
Demos

There are a couple of demos included with the package, which you can find in their own special menu. When you load these the first thing to do is make sure the experiment settings specify the same resolution as your monitor, otherwise the screen can appear off-centred and strangely scaled.

Stroop demo

This runs a digital demonstration of the Stroop effect [1]. The experiment presents a series of coloured words written in coloured ‘inks’. Subjects have to report the colour of the letters for each word, but find it harder to do so when the letters are spelling out a different (incongruous) colour. Reaction times for the congruent trials (where letter colour matches the written word) are faster than for the incongruent trials.

From this demo you should note:
  • How to set up a trial list in a .csv or .xlsx file
  • How to record key presses and reaction times (using the resp Component in trial Routine)
  • How to change a stimulus parameter on each repetition of the loop. The text and rgb values of the word Component are based on thisTrial, which represents a single iteration of the trials loop. They have been set to change every repeat (don’t forget that step!)
  • How to present instructions: just have a long-lasting TextStim and then force end of the Routine when a key is pressed (but don’t bother storing the key press).
[1] Stroop, J.R. (1935). “Studies of interference in serial verbal reactions”. Journal of Experimental Psychology 18: 643-662.
Psychophysics Staircase demo

This is a mini psychophysics experiment, designed to find the contrast detection threshold of a gabor i.e. find the contrast where the observer can just see the stimulus.

From this demo you should note:
  • The opening dialog box requires the participant to enter the orientation of the stimulus; the required fields here are determined by ‘Experiment Info’ in ‘Preferences’, which is a python dictionary. This information is then entered into the stimulus parameters using $expInfo['ori']
  • The phase of the stimulus is set to change every frame and its value is determined by the value of trialClock.getTime()*2. Every Routine has a clock associated with it that gets reset at the beginning of the iteration through the Routine. There is also a globalClock that can be used in the same way. The phase of a Patch Component ranges 0-1 (and wraps to that range if beyond it). The result in this case is that the grating drifts at a rate of 2Hz (a minimal code sketch of this idea is shown after this list).
  • The contrast of the stimulus is determined using an adaptive staircase. The Staircase methods are different to those used for a loop which uses predetermined values. An important thing to note is that you must define the correct answer.
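
For reference, here is a minimal Coder-style sketch of the same idea: updating a grating’s phase from a clock on every frame to give a 2Hz drift (the stimulus settings here are arbitrary examples, not the ones used in the demo):

from psychopy import visual, core

win = visual.Window()
grating = visual.GratingStim(win, tex='sin', mask='gauss', sf=4, size=0.5)

trialClock = core.Clock()
while trialClock.getTime() < 2.0:  # show the stimulus for 2 seconds
    grating.phase = trialClock.getTime() * 2  # phase is in cycles, so this drifts at 2Hz
    grating.draw()
    win.flip()
win.close()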

Routines

An experiment consists of one or more Routines. A Routine might specify the timing of events within a trial or the presentation of instructions or feedback. Multiple Routines can then be combined in the Flow, which controls the order in which these occur and the way in which they repeat.

To create a new Routine, use the Experiment menu. The display size of items within a routine can be adjusted (see the View menu).

Within a Routine there are a number of components. These components determine the occurrence of a stimulus, or the recording of a response. Any number of components can be added to a Routine. Each has its own line in the Routine view that shows when the component starts and finishes in time, and these can overlap.

For now the time axis of the Routines panel is fixed, representing seconds (one line is one second). This will hopefully change in the future so that units can also be number of frames (more precise) and can be scaled up or down to allow very long or very short Routines to be viewed easily. That’s on the wishlist...

Flow

In the Flow panel a number of Routines can be combined to form an experiment. For instance, your study may have a Routine that presented initial instructions and waited for a key to be pressed, followed by a Routine that presented one trial which should be repeated 5 times with various different parameters set. All of this is achieved in the Flow panel. You can adjust the display size of the Flow panel (see View menu).

Adding Routines

The Routines that the Flow will use should be generated first (although their contents can be added or altered at any time). To insert a Routine into the Flow click the appropriate button in the left of the Flow panel or use the Experiment menu. A dialog box will appear asking which of your Routines you wish to add. To select the location move the mouse to the section of the flow where you wish to add it and click on the black disk.

Loops

Loops control the repetition of Routines and the choice of stimulus parameters for each. PsychoPy can generate the next trial based on the method of constants or using an adaptive staircase. To insert a loop use the button on the left of the Flow panel, or the item in the Experiment menu of the Builder. The start and end of a loop is set in the same way as the location of a Routine (see above). Loops can encompass one or more Routines and other loops (i.e. they can be nested).

As with components in Routines, the loop must be given a name, which must be unique and made up of only alpha-numeric characters (underscores are allowed). I would normally use a plural name, since the loop represents multiple repeats of something. For example, trials, blocks or epochs would be good names for your loops.

It is usually best to use trial information that is contained in an external file (.xlsx or .csv). When inserting a loop into the flow you can browse to find the file you wish to use for this. An example of this kind of file can be found in the Stroop demo (trialTypes.xlsx). The column names are turned into variables (in this case text, letterColor, corrAns and congruent), these can be used to define parameters in the loop by putting a $ sign before them e.g. $text.

As the column names from the input file are used in this way they must have legal variable names i.e. they must be unique, have no punctuation or spaces (underscores are ok) and must not start with a digit.

The parameter Is trials exists because some loops are not there to indicate trials per se but a set of stimuli within a trial, or a set of blocks. In these cases we don’t want the data file to add an extra line with each pass around the loop. This parameter can be unchecked to improve (hopefully) your data file outputs. [Added in v1.81.00]

Method of Constants

Selecting a loop type of random, sequential, or fullRandom will result in a method of constants experiment, whereby the types of trials that can occur are predetermined. That is, the trials cannot vary depending on how the subject has responded on a previous trial. In this case, a file must be provided that describes the parameters for the repeats. This should be an Excel 2007 (xlsx) file or a comma-separated-value (csv) file in which columns refer to the parameters needed to describe the stimuli etc. and each row describes one type of trial. These can easily be generated from a spreadsheet package like Excel. (Note that csv files can also be generated using most text editors, as long as they allow you to save the file as “plain text”; other output formats will not work, including “rich text”.) The top row should be a row of headers: text labels describing the contents of the respective columns. (Headers must not include spaces or any characters other than letters, numbers or underscores, and must not be the same as any variable names used elsewhere in your experiment.) For example, a file containing the following table:

ori   text    corrAns
0     aaa     left
90    aaa     left
0     bbb     right
90    bbb     right

would represent 4 different conditions (or trial types, one per line). The header line describes the parameters in the 3 columns: ori, text and corrAns. It’s really useful to include a column called corrAns that shows what the correct key press is going to be for this trial (if there is one).

If the loop type is sequential then, on each iteration through the Routines, the next row will be selected in the order listed in the file. Under a random order, the next row will be selected at random (without replacement); it can only be selected again after all the other rows have also been selected. nReps determines how many repeats will be performed (for all conditions). The total number of trials will be the number of conditions (= number of rows in the file, not counting the header row) times the number of repetitions, nReps. With the fullRandom option, the entire list of trials including repetitions is used in random order, allowing the same item to appear potentially many times in a row, and to repeat without necessarily having done all of the other trials. For example, with 3 repetitions, a file of trial types like this:

letter
a
b
c

could result in the following possible sequences. sequential could only ever give one sequence with this order: [a b c a b c a b c]. random will give one of 216 different orders (= 3! * 3! * 3! = (nTrials!)^nReps ), for example: [b a c a b c c a b]. Here the letters are effectively in sets of (abc) (abc) (abc), and randomization is only done within each set, ensuring (for example) that there are at least two a’s before the subject sees a 3rd b. Finally, fullRandom will return one of 362,880 different orders (= 9! = (nReps * nTrials)! ), such as [b b c a a c c a b], which random never would. There are no longer mini-blocks or “sets of trials” within the longer run. This means that, by chance, it would also be possible to get a very un-random-looking sequence like [a a a b b b c c c].

It is possible to achieve any sequence you like, subject to any constraints that are logically possible. To do so, specify every trial in the desired order in the file, and then for the loop select sequential order and set nReps=1.
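
If you are working in code rather than the Builder, the same sequencing options are available through the data module. A minimal sketch, assuming a conditions file like the one above saved as conditions.xlsx (the filename is just an example):

from psychopy import data

# each row of the file becomes a dict of parameters
conditions = data.importConditions('conditions.xlsx')

# method can be 'sequential', 'random' or 'fullRandom', matching the loop types above
trials = data.TrialHandler(trialList=conditions, nReps=3, method='fullRandom')

for thisTrial in trials:
    print(thisTrial)  # e.g. {'ori': 0, 'text': 'aaa', 'corrAns': 'left'}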

Selecting a subset of conditions

In the standard Method of Constants you would use all the rows/conditions within your conditions file. However there are often times when you want to select a subset of your trials before randomising and repeating.

The parameter Select rows allows this. You can specify which rows you want to use by inserting values here:

  • 0,2,5 gives the 1st, 3rd and 6th entries of a list (Python starts counting at index zero)
  • random(4)*10 gives 4 indices from 0 to 10 (so selects 4 out of 11 conditions)
  • 5:10 selects the 6th to 10th rows
  • $myIndices uses a variable that you’ve already created

Note in the last case that 5:8 isn’t valid syntax for a variable so you cannot do:

myIndices = 5:8

but you can do:

myIndices = slice(5,8) #python object to represent a slice
myIndices = "5:8" #a string that PsychoPy can then parse as a slice later
myIndices = "5:8:2" #as above but

Note that PsychoPy uses Python’s built-in slicing syntax (where the first index is zero and the last entry of a slice doesn’t get included). You might want to check the outputs of your selection in the Python shell (bottom of the Coder view) like this:

>>> range(100)[5:8] #slice 5:8 of a standard set of indices
[5, 6, 7]
>>> range(100)[5:10:2] #slice 5:10 in steps of 2
[5, 7, 9]

Check that the conditions you wanted to select are the ones you intended!

Staircase methods

The loop type staircase allows the implementation of adaptive methods. That is, aspects of a trial can depend on (or “adapt to”) how a subject has responded earlier in the study. This could be, for example, simple up-down staircases where an intensity value is varied trial-by-trial according to certain parameters, or a stop-signal paradigm to assess impulsivity. For this type of loop a ‘correct answer’ must be provided from something like a Keyboard Component. Various parameters for the staircase can be set to govern how many trials will be conducted and how many correct or incorrect answers make the staircase go up or down.
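
In code, the equivalent adaptive logic is provided by the data module’s StairHandler. A minimal sketch of a 1-up-3-down staircase (all values here are arbitrary examples, and the response is simulated rather than collected from a participant):

from psychopy import data
from numpy.random import random

stairs = data.StairHandler(startVal=0.5, stepType='lin',
                           stepSizes=[0.1, 0.05, 0.02],
                           nUp=1, nDown=3, nTrials=40,
                           minVal=0.0, maxVal=1.0)

for thisContrast in stairs:
    # ...present a stimulus at thisContrast and collect a real response here...
    correct = random() < 0.8     # placeholder: simulate roughly 80% correct
    stairs.addResponse(correct)  # True/1 = correct, False/0 = incorrect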

Accessing loop parameters from components

The parameters from your loops are accessible to any component enclosed within that loop. The simplest (and default) way to address these variables is simply to call them by the name of the parameter, prepended with $ to indicate that this is the name of a variable. For example, if your Flow contains a loop with the above table as its input trial types file then you could give one of your stimuli an orientation $ori which would depend on the current trial type being presented. Example scenarios:

  1. You want to loop randomly over some conditions in a loop called trials. Your conditions are stored in a csv file with headings ‘ori’, ‘text’, ‘corrAns’ which you provide to this loop. You can then access these values from any component using $ori, $text, and $corrAns
  2. You create a random loop called blocks and give it an Excel file with a single column called movieName listing filenames to be played. On each repeat you can access this with $movieName
  3. You create a staircase loop called stairs. On each trial you can access the current value in the staircase with $thisStair

Note

When you set a component to use a parameter that will change (e.g. on each repeat through the loop) you should remember to change the component parameter from `constant` to `set every repeat` or `set every frame` or it won’t have any effect!

Reducing namespace clutter (advanced)

The downside of the above approach is that the names of trial parameters must be different between every loop, as well as not matching any of the predefined names in python, numpy and PsychoPy. For example, the stimulus called movie cannot use a parameter also called movie (so you need to call it movieName). An alternative method can be used without these restrictions. If you set the Builder preference unclutteredNamespace to True you can then access the variables by referring to the parameter as an attribute of the singular name of the loop prepended with this. For example, if you have a loop called trials which has the above file attached to it, then you can access the stimulus ori with $thisTrial.ori. If you have a loop called blocks you could use $thisBlock.corrAns.

Now, although the name of the loop must still be valid and unique, the names of the parameters of the file do not have the same requirements (they must still not contain spaces or punctuation characters).

Components

Routines in the Builder contain any number of components, which typically define the parameters of a stimulus or an input/output device.

The following components are available, as at version 1.65, but further components will be added in the future including Parallel/Serial ports and other visual stimuli (e.g. GeometricStim).

Aperture Component

This component can be used to filter the visual display, as if the subject is looking at it through an opening. Currently only circular apertures are supported. Moreover, only one aperture is enabled at a time. You can’t “double up”: a second aperture takes precedence.
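
For reference, the same stimulus class can be used directly from code. A minimal sketch (the sizes and timings are arbitrary examples; note that the Window needs a stencil buffer for apertures):

from psychopy import visual, core

win = visual.Window(units='pix', allowStencil=True)    # apertures need a stencil buffer
aperture = visual.Aperture(win, size=120, pos=(0, 0))  # a 120-pixel circular aperture
grating = visual.GratingStim(win, tex='sin', sf=0.02, size=400, units='pix')

aperture.enable()   # drawing is now restricted to the aperture (it is also enabled on creation)
grating.draw()
win.flip()
core.wait(1.0)
aperture.disable()  # subsequent drawing covers the whole window again
win.close()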

name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start : float or integer
The time that the aperture should start having its effect. See Defining the onset/duration of components for details.
stop :
When the aperture stops having its effect. See Defining the onset/duration of components for details.
pos : [X,Y]
The position of the centre of the aperture, in the units specified by the stimulus or window.
size : integer
The size controls how big the aperture will be, in pixels, default = 120
units : pix
What units to use (currently only pix).

See also

API reference for Aperture

Cedrus Button Box Component

This component allows you to connect to a Cedrus Button Box to collect key presses.

Note that there is a limitation currently that a button box can only be used in a single Routine. Otherwise PsychoPy tries to initialise it twice which raises an error. As a workaround, you need to insert the start-routine and each-frame code from the button box into a code component for a second routine.

Properties
Name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
Start :
The time that the button box is first read. See Defining the onset/duration of components for details.
Stop :
Governs the duration for which the button box is read. See Defining the onset/duration of components for details.
Force end of Routine : true/false
If this is checked, the first response will end the routine.
Allowed keys : None, or an integer, list, or tuple of integers 0-7
This field lets you specify which buttons (None, or some or all of 0 through 7) to listen to.
Store : (choice of: first, last, all, nothing)
Which button events to save in the data file. Events and the response times are saved, with RT being recorded by the button box (not by PsychoPy).
Store correct : true/false
If selected, a correctness value will be saved in the data file, based on a match with the given correct answer.
Correct answer: button
The correct answer, used by Store correct.
Discard previous : true/false
If selected, any previous responses will be ignored (typically this is what you want).
Advanced
Device number: integer
This is only needed if you have multiple Cedrus devices connected and you need to specify which to use.
Use box timer : true/false
Set this to True to use the button box timer for timing information (may give better time resolution)

See also

API reference for iolab

Code Component

The Code Component can be used to insert short pieces of python code into your experiments. This might be to create a variable that you want for another Component, to manipulate images before displaying them, or to interact with hardware for which there isn’t yet a pre-packaged component in PsychoPy (e.g. writing code to interact with the serial/parallel ports). See code uses below.

Be aware that the code for each of the components in your Routine is executed in the order they appear on the Routine display (from top to bottom). If you want your Code Component to alter a variable to be used by another component immediately, then it needs to be above that component in the view. If you want the code not to take effect until the next frame, however, put it at the bottom of the Routine. You can move Components up and down the Routine by right-clicking on their icons.

Within your code you can use other variables and modules from the script. For example, all routines have a stopwatch-style Clock associated with them, which gets reset at the beginning of that repeat of the routine. So if you have a Routine called trial, there will be a Clock called trialClock and so you can get the time (in sec) from the beginning of the trial by using:

currentT = trialClock.getTime()

To see what other variables you might want to use, and also what terms you need to avoid in your chunks of code, compile your script before inserting the code object and take a look at the contents of that script.

Note that this page is concerned with Code Components specifically, and not all cases in which you might use python syntax within the Builder. It is also possible to put code into a non-code input field (such as the duration or text of a Text Component). The syntax there is slightly different (requiring a $ to trigger the special handling, or \$ to avoid triggering special handling). The syntax to use within a Code Component is always regular python syntax.

Parameters

The parameters of the Code Component simply specify the code that will get executed at 5 different points within the experiment. You can use as many or as few of these as you need for any Code Component:

Begin Experiment:
Things that need to be done just once, like importing a supporting module, initialising a variable for later use.
Begin Routine:
Certain things might need to be done just once at the start of a Routine e.g. at the beginning of each trial you might decide which side a stimulus will appear on
Each Frame:
Things that need to be updated constantly, throughout the experiment. Note that these will be executed exactly once per video frame (on the order of every 10ms), to give dynamic displays. Static displays do not need to be updated every frame.
End Routine:
At the end of the Routine (e.g. the trial) you may need to do additional things, like checking if the participant got the right answer
End Experiment:
Use this for things like saving data to disk, presenting a graph(?), or resetting hardware to its original state.
Example code uses
1. Set a random location for your target stimulus

There are many ways to do this, but you could add the following to the Begin Routine section of a Code Component at the top of your Routine. Then set your stimulus position to be $targetPos and set the correct answer field of a Keyboard Component to be $corrAns (set both of these to update on every repeat of the Routine):

if random() > 0.5:
    targetPos = [-2.0, 0.0]  # on the left
    corrAns = 'left'
else:
    targetPos = [+2.0, 0.0]  # on the right
    corrAns = 'right'
2. Create a patch of noise

As with the above there are many different ways to create noise, but a simple method would be to add the following to the Begin Routine section of a Code Component at the top of your Routine. Then set the image as $noiseTexture:

noiseTexture = np.random.rand(128, 128) * 2.0 - 1  # an array of values in the range -1 to 1
3. Send a feedback message at the end of the experiment

Create a Code Component with this in the Begin Experiment field:

expClock = core.Clock()

and with this in the End Experiment field:

print "Thanks for participating - that took %.2f minutes in total" %(expClock.getTime()/60.0)

(or you could create a Text Component with that as contents rather than printing it).

4. End a loop early.

Code components can also be used to control the end of a loop. See examples in Recipes:builderTerminateLoops.

What variables are available to use?

The most complete way to find this out for your particular script is to compile it and take a look at what’s in there. Below are some options that appear in nearly all scripts. Remember that those variables are Python objects and can have attributes of their own. You can find out about those attributes using:

dir(myObject)

Common PsychoPy variables:

  • expInfo: a Python dictionary containing the information from the starting dialog box, e.g. it generally includes the ‘participant’ identifier. You can access that in your experiment using expInfo[‘participant’]
  • t: the current time (in seconds) measured from the start of this Routine
  • frameN: the number of completed frames since the start of the Routine (=0 in the first frame)
  • win: the Window that the experiment is using

Your own variables:

  • anything you’ve created in a Code Component is available for the rest of the script. (Sometimes you might need to define it at the beginning of the experiment, so that it will be available throughout.)

  • the name of any other stimulus or the parameters from your file also exist as variables.

  • most Components have a status attribute, which is useful to determine whether a stimulus has NOT_STARTED, STARTED or FINISHED. For example, to play a tone at the end of a Movie Component (of unknown duration) you could set the start of your tone to have the ‘condition’

    myMovieName.status==FINISHED
    

Selected contents of the numpy library and numpy.random are imported by default. The entire numpy library is imported as np, so you can use several hundred maths functions by prefixing them with ‘np.’ (a brief sketch of some of these in use follows the list below):

  • random() , randint() , normal() , shuffle() options for creating arrays of random numbers.
  • sin(), cos(), tan(), and pi: For geometry and trig. By default angles are in radians, if you want the cosine of an angle specified in degrees use cos(angle*180/pi), or use numpy’s conversion functions, rad2deg(angle) and deg2rad(angle).
  • linspace(): Create an array of linearly spaced values.
  • log(), log10(): The natural and base-10 log functions, respectively. (It is a lowercase-L in log).
  • sum(), len(): For the sum and length of a list or array. To find an average, it is better to use average() (due to the potential for integer division issues with sum()/len() ).
  • average(), sqrt(), std(): For average (mean), square root, and standard deviation, respectively. Note: Be sure that the numpy standard deviation formula is the one you want!
  • np.______: Many math-related features are available through the complete numpy libraries, which are available within psychopy builder scripts as ‘np.’. For example, you could use np.hanning(3) or np.random.poisson(10, 10) in a code component.
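
As a brief sketch, lines like the following could go in the Begin Routine section of a Code Component (the imports are listed here for completeness but are already done for you in a Builder-generated script; the variable names are just examples):

import numpy as np
from numpy import linspace, deg2rad
from numpy.random import random, randint

jitterT = 0.5 + random()        # a jittered onset time between 0.5 and 1.5 s
oris = linspace(0, 150, 6)      # array([0., 30., 60., 90., 120., 150.])
thisOri = oris[randint(0, 6)]   # randint(0, 6) returns an integer from 0 to 5
thisOriRad = deg2rad(thisOri)   # convert degrees to radians
taper = np.hanning(64)          # anything else in numpy is reachable via np.
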
Dots (RDK) Component

The Dots Component allows you to present a Random Dot Kinematogram (RDK) to the participant of your study. These are fields of dots that drift in different directions and subjects are typically required to identify the ‘global motion’ of the field.

There are many ways to define the motion of the signal and noise dots. In PsychoPy the way the dots are configured follows Scase, Braddick & Raymond (1996). Although Scase et al (1996) show that the choice of algorithm for your dots actually makes relatively little difference, there are some potential gotchas. Think carefully about whether each of these will affect your particular case:

  • limited dot lifetimes: as your dots drift in one direction they go off the edge of the stimulus and are replaced randomly in the stimulus field. This could lead to a higher density of dots in the direction of motion providing subjects with an alternative cue to direction. Keeping dot lives relatively short prevents this.
  • noiseDots=’direction’: some groups have used noise dots that appear in a random location on each frame (noiseDots=’location’). This has the disadvantage that the noise dots not only have a random direction but also a random speed (whereas signal dots have a constant speed and constant direction)
  • signalDots=’same’: on each frame the dots constituting the signal could be the same as on the previous frame or different. If ‘different’, participants could follow a single dot for a long time and calculate its average direction of motion to get the ‘global’ direction, because the dots would sometimes take a random direction and sometimes take the signal direction.

As a result of these, the defaults for PsychoPy are to have signalDots that are from a ‘different’ population, noise dots that have random ‘direction’ and a dot life of 3 frames.
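
The same stimulus can be created directly in code with visual.DotStim. A minimal sketch using the default-style settings described above (the window and dot values are arbitrary examples):

from psychopy import visual

win = visual.Window()
dots = visual.DotStim(win, nDots=100, fieldShape='circle', fieldSize=1.0,
                      dotSize=3, dotLife=3,               # dot life of 3 frames
                      signalDots='different', noiseDots='direction',
                      dir=0, speed=0.005, coherence=0.5)  # 50% coherent motion to the right

for frameN in range(120):  # roughly 2 seconds at 60Hz
    dots.draw()
    win.flip()
win.close()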

Parameters
name :
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop :
Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for details.
units : None, ‘norm’, ‘cm’, ‘deg’ or ‘pix’
If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options.
nDots : int
number of dots to be generated
fieldPos : (x,y) or [x,y]
specifying the location of the centre of the stimulus.
fieldSize : a single value, specifying the diameter of the field
Sizes can be negative and can extend beyond the window.
fieldShape :
Defines the shape of the field in which the dots appear. For a circular field the nDots represents the average number of dots per frame, but on each frame this may vary a little.
dotSize
Always specified in pixels
dotLife : int
Number of frames each dot lives for (-1=infinite)
dir : float (degrees)
Direction of the signal dots
speed : float
Speed of the dots (in units per frame)
signalDots :
If ‘same’ then the signal and noise dots are constant. If different then the choice of which is signal and which is noise gets randomised on each frame. This corresponds to Scase et al’s (1996) categories of RDK.
noiseDots : ‘direction’, ‘position’ or ‘walk’
Determines the behaviour of the noise dots, taken directly from Scase et al’s (1996) categories. For ‘position’, noise dots take a random position every frame. For ‘direction’ noise dots follow a random, but constant direction. For ‘walk’ noise dots vary their direction every frame, but keep a constant speed.

See also

API reference for DotStim

Grating Component

The Grating stimulus allows a texture to be wrapped/cycled in 2 dimensions, optionally in conjunction with a mask (e.g. Gaussian window). The texture can be a bitmap image from a variety of standard file formats, or a synthetic texture such as a sinusoidal grating. The mask can also be derived from either an image, or mathematical form such as a Gaussian.

When using gratings, if you want to use the spatial frequency setting then create just a single cycle of your texture and allow PsychoPy to handle the repetition of that texture (do not create the cycles you’re expecting within the texture).

Gratings can have their position, orientation, size and other settings manipulated on a frame-by-frame basis. There is a performance advantage (in terms of milliseconds) to using images which are square and powers of two (32, 64, 128, etc.), however this is slight and would not be noticed in the majority of experiments.
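
For reference, the corresponding class in code is visual.GratingStim. A minimal sketch (all values are arbitrary examples):

from psychopy import visual, core

win = visual.Window()
grating = visual.GratingStim(win, tex='sin', mask='gauss',
                             sf=4,          # 4 cycles across the stimulus (norm units)
                             size=0.8, ori=45, phase=0.0)
grating.draw()
win.flip()
core.wait(1.0)
win.close()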

Parameters
Name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
Start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
Stop :
Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for details.
Color :
See Color spaces
Color space : rgb, dkl or lms
See Color spaces
Opacity : 0-1
Can be used to create semi-transparent gratings
Orientation : degrees
The orientation of the entire patch (texture and mask) in degrees.
Position : [X,Y]
The position of the centre of the stimulus, in the units specified by the stimulus or window
Size : [sizex, sizey] or a single value (applied to x and y)
The size of the stimulus in the given units of the stimulus/window. If the mask is a Gaussian then the size refers to width at 3 standard deviations on either side of the mean (i.e. sd=size/6)
Units : deg, cm, pix, norm, or inherit from window
See Units for the window and stimuli
Advanced Settings
Texture: a filename, a standard name (sin, sqr) or a variable giving a numpy array
This specifies the image that will be used as the texture for the visual patch. The image can be repeated on the patch (in either x or y or both) by setting the spatial frequency to be high (or can be stretched so that only a subset of the image appears by setting the spatial frequency to be low). Filenames can be relative or absolute paths and can refer to most image formats (e.g. tif, jpg, bmp, png, etc.). If this is set to none, the patch will be a flat colour.
Mask : a filename, a standard name (gauss, circle, raisedCos) or a numpy array of dimensions NxNx1
The mask can define the shape (e.g. circle will make the patch circular) or something which overlays the patch e.g. noise.
Interpolate :
If linear is selected then linear interpolation will be applied when the image is rescaled to the appropriate size for the screen. Nearest will use a nearest-neighbour rule.
Phase : single float or pair of values [X,Y]
The position of the texture within the mask, in both X and Y. If a single value is given it will be applied to both dimensions. The phase has units of cycles (rather than degrees or radians), wrapping at 1. As a result, setting the phase to 0,1,2... is equivalent, causing the texture to be centered on the mask. A phase of 0.25 will cause the texture to shift by a quarter of a cycle (equivalent to pi/2 radians). The advantage of this is that if you set the phase according to time it is automatically in Hz.
Spatial Frequency : [SFx, SFy] or a single value (applied to x and y)
The spatial frequency of the texture on the patch. The units are dependent on the specified units for the stimulus/window; if the units are deg then the SF units will be cycles/deg, if units are norm then the SF units will be cycles per stimulus. If this is set to none then only one cycle will be displayed.
Texture Resolution : an integer (power of two)
Defines the size of the resolution of the texture for standard textures such as sin, sqr etc. For most cases a value of 256 pixels will suffice, but if stimuli are going to be very small then a lower resolution will use less memory.

See also

API reference for GratingStim

Image Component

The Image stimulus allows an image to be presented, which can be a bitmap image from a variety of standard file formats, with an optional transparency mask that can effectively control the shape of the image. The mask can also be derived from an image file, or a mathematical form such as a Gaussian.

It is a really good idea to get your image to roughly the size (in pixels) that it will appear on screen, to save memory. If you leave the image at the 12 megapixel resolution straight from your camera, but then present it on a standard screen at 1680x1050 (=1.6 megapixels), then PsychoPy and your graphics card have to do an awful lot of unnecessary work. There is a performance advantage (in terms of milliseconds) to using images which are square and powers of two (32, 64, 128, etc.), but this is slight and would not be noticed in the majority of experiments.

Images can have their position, orientation, size and other settings manipulated on a frame-by-frame basis.
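
For reference, the corresponding class in code is visual.ImageStim. A minimal sketch (the filename is a hypothetical example):

from psychopy import visual, core

win = visual.Window()
img = visual.ImageStim(win, image='face.png',  # hypothetical image file
                       mask='gauss',           # optional transparency mask
                       size=None,              # None/blank keeps the image's native size
                       pos=(0, 0), ori=0)
img.draw()
win.flip()
core.wait(1.0)
win.close()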

Parameters
Name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
Start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
Stop :
Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for details.
Image : a filename or a standard name (sin, sqr)
Filenames can be relative or absolute paths and can refer to most image formats (e.g. tif, jpg, bmp, png, etc.). If this is set to none, the patch will be a flat colour.
Position : [X,Y]
The position of the centre of the stimulus, in the units specified by the stimulus or window
Size : [sizex, sizey] or a single value (applied to x and y)
The size of the stimulus in the given units of the stimulus/window. If the mask is a Gaussian then the size refers to width at 3 standard deviations on either side of the mean (i.e. sd=size/6) Set this to be blank to get the image in its native size.
Orientation : degrees
The orientation of the entire patch (texture and mask) in degrees.
Opacity : value from 0 to 1
If opacity is reduced then the underlying images/stimuli will show through
Units : deg, cm, pix, norm, or inherit from window
See Units for the window and stimuli
Advanced Settings
Color : Colors can be applied to luminance-only images (not to rgb images)
See Color spaces
Color space : to be used if a color is supplied
See Color spaces
Mask : a filename, a standard name (gauss, circle, raisedCos) or a numpy array of dimensions NxNx1
The mask can define the shape (e.g. circle will make the patch circular) or something which overlays the patch e.g. noise.
Interpolate :
If linear is selected then linear interpolation will be applied when the image is rescaled to the appropriate size for the screen. Nearest will use a nearest-neighbour rule.
Texture Resolution:
This is only needed if you use a synthetic texture (e.g. sinusoidal grating) as the image.

See also

API reference for ImageStim

ioLab Systems buttonbox Component

A button box is a hardware device that is used to collect participant responses with high temporal precision, ideally with true ms accuracy.

Both the response (which button was pressed) and time taken to make it are returned. The time taken is determined by a clock on the device itself. This is what makes it capable (in theory) of high precision timing.

Check the log file to see how long it takes for PsychoPy to reset the button box’s internal clock. If this takes a while, then the RT timing values are not likely to be high precision. It might be possible for you to obtain a correction factor for your computer + button box set up, if the timing delay is highly reliable.

The ioLabs button box also has a built-in voice-key, but PsychoPy does not have an interface for it. Use a microphone component instead.

Properties
name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop :
The duration for which the stimulus is presented. See Defining the onset/duration of components for details.
Force end of Routine : checkbox
If this is checked, the first response will end the routine.
Active buttons : None, or an integer, list, or tuple of integers 0-7
The ioLabs box lets you specify a set of active buttons. Responses on non-active buttons are ignored by the box, and never sent to PsychoPy. This field lets you specify which buttons (None, or some or all of 0 through 7).
Lights :

If selected, the lights above the active buttons will be turned on.

Using code components, it is possible to turn on and off specific lights within a trial. See the API for iolab.

Store : (choice of: first, last, all, nothing)
Which button events to save in the data file. Events and the response times are saved, with RT being recorded by the button box (not by PsychoPy).
Store correct : checkbox
If selected, a correctness value will be saved in the data file, based on a match with the given correct answer.
Correct answer: button
The correct answer, used by Store correct.
Discard previous : checkbox
If selected, any previous responses will be ignored (typically this is what you want).
Lights off : checkbox
If selected, all lights will be turned off at the end of each routine.

See also

API reference for iolab

Keyboard Component

The Keyboard component can be used to collect responses from a participant.

By checking the Force end routine box and not storing the key press, it can be used simply to end a Routine.

Parameters
Name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
Start : float or integer
The time that the keyboard should first get checked. See Defining the onset/duration of components for details.
Stop :
When the keyboard is no longer checked. See Defining the onset/duration of components for details.
Force end routine
If this box is checked then the Routine will end as soon as one of the allowed keys is pressed.
Allowed keys
A list of allowed keys can be specified here, e.g. [‘m’,’z’,‘1’,‘2’], or the name of a variable holding such a list. If this box is left blank then any key that is pressed will be read. Only allowed keys count as having been pressed; any other key will not be stored and will not force the end of the Routine. Note that key names (even for number keys) should be given in single quotes, separated by commas. Cursor control keys can be accessed with ‘up’, ‘down’, and so on; the space bar is ‘space’. To find other special keys, run the Coder Input demo, “what_key.py”, press the key, and check the Coder output window.
Store
Which key press, if any, should be stored; the first to be pressed, the last to be pressed or all that have been pressed. If the key press is to force the end of the trial then this setting is unlikely to be necessary, unless two keys happen to be pressed in the same video frame. The response time will also be stored if a keypress is recorded. This time will be taken from the start of keyboard checking (e.g. if the keyboard was initiated 2 seconds into the trial and a key was pressed 3.2s into the trial, the response time will be recorded as 1.2s).
Store correct
Check this box if you wish to store whether or not this key press was correct. If so then fill in the next box that defines what would constitute a correct answer e.g. left, 1 or $corrAns (note this should not be in inverted commas). This is given as Python code that should return True (1) or False (0). Often this correct answer will be defined in the settings of the Loops.
Discard previous
Check this box to ensure that only key presses that occur during this keyboard checking period are used. If this box is not checked a keyboard press that has occurred before the start of the checking period will be interpreted as the first keyboard press. For most experiments this box should be checked.

See also

API reference for Mouse

Microphone Component

Please note: This is a new component, and is subject to change.

The microphone component provides a way to record sound during an experiment. To do so, specify the starting time relative to the start of the routine (see start below) and a stop time (= duration in seconds). A blank duration evaluates to recording for 0.000s.

The resulting sound files are saved in .wav format (at 48000 Hz, 16 bit), one file per recording. The files appear in a new folder within the data directory (the subdirectory name ends in _wav). The file names include the unix (epoch) time of the onset of the recording with milliseconds, e.g., mic-1346437545.759.wav.

It is possible to stop a recording that is in progress by using a code component. Every frame, check for a condition (such as key ‘q’, or a mouse click), and call the .stop() method of the microphone component. The recording will end at that point and be saved. For example, if mic is the name of your microphone component, then in the code component, do this on Each frame:

if event.getKeys(['q']):
    mic.stop()
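
For use from code, the underlying class is AdvAudioCapture in psychopy.microphone. A minimal sketch, assuming this release’s microphone API (the sample rate and duration are just examples; the audio system must be switched on first):

from psychopy import microphone

microphone.switchOn(sampleRate=48000)        # initialise the audio system (pyo)
mic = microphone.AdvAudioCapture(name='mic')

mic.record(sec=2.0, block=True)              # record for 2 seconds, saving a .wav file
print(mic.savedFile)                         # path of the file that was just saved
microphone.switchOff()
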
Parameters
name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start : float or integer
The time that the stimulus should first play. See Defining the onset/duration of components for details.
stop (duration):
The length of time (sec) to record for. An expected duration can be given for visualisation purposes. See Defining the onset/duration of components for details; note that only seconds are allowed.

See also

API reference for AdvAudioCapture

Mouse Component

The Mouse component can be used to collect responses from a participant. The coordinates of the mouse location are given in the same coordinates as the Window, with (0,0) in the centre.

Scenarios

This can be used in various ways. Here are some scenarios (email the list if you have other uses for your mouse):

Use the mouse to record the location of a button press

Use the mouse to control stimulus parameters
Imagine you want to use your mouse to make your ‘patch’ bigger or smaller and save the final size. Call your mouse ‘mouse’, set it to save its state at the end of the trial and set the button press to end the Routine. Then for the size setting of your Patch stimulus insert $mouse.getPos()[0] to use the x position of the mouse to control the size or $mouse.getPos()[1] to use the y position.
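
In a Coder script the same idea looks something like the following minimal sketch, using event.Mouse (the stimulus and the stopping rule are arbitrary examples):

from psychopy import visual, event

win = visual.Window()
mouse = event.Mouse(win=win)
patch = visual.GratingStim(win, tex='sin', mask='gauss')

while not any(mouse.getPressed()):   # run until any mouse button is pressed
    x, y = mouse.getPos()            # position in the window's units
    patch.size = max(abs(x), 0.05)   # use the x position to control the size
    patch.draw()
    win.flip()
win.close()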

Tracking the entire path of the mouse during a period

Parameters
Name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start :
The time that the mouse should first be checked. See Defining the onset/duration of components for details.
stop :
When the mouse is no longer checked. See Defining the onset/duration of components for details.
Force End Routine on Press
If this box is checked then the Routine will end as soon as one of the mouse buttons is pressed.
Save Mouse State
How often do you need to save the state of the mouse? Every time the subject presses a mouse button, at the end of the trial, or every single frame? Note that the text output for cases where you store the mouse data repeatedly per trial (e.g. every press or every frame) is likely to be very hard to interpret, so you may then need to analyse your data using the psydat file (with python code) instead. Hopefully in future releases the output of the text file will be improved.
Time Relative To
Whenever the mouse state is saved (e.g. on button press or at end of trial) a time is saved too. Do you want this time to be relative to start of the Routine, or the start of the whole experiment?

See also

API reference for Mouse

Movie Component

The Movie component allows movie files to be played from a variety of formats (e.g. mpeg, avi, mov).

The movie can be positioned, rotated, flipped and stretched to any size on the screen (using the Units for the window and stimuli given).

Parameters
name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop :
Governs the duration for which the stimulus is presented (if you want to cut a movie short). Usually you can leave this blank and insert the Expected duration just for visualisation purposes. See Defining the onset/duration of components for details.
movie : string
The filename of the movie, including the path. The path can be absolute or relative to the location of the experiment (.psyexp) file.
pos : [X,Y]
The position of the centre of the stimulus, in the units specified by the stimulus or window
ori : degrees
Movies can be rotated in real-time too! This specifies the orientation of the movie in degrees.
size : [sizex, sizey] or a single value (applied to both x and y)
The size of the stimulus in the given units of the stimulus/window.
units : deg, cm, pix, norm, or inherit from window
See Units for the window and stimuli

See also

API reference for MovieStim

Parallel Port Out Component

This component allows you to send triggers to a parallel port or to a LabJack device.

An example usage would be in EEG experiments to set the port to 0 when no stimuli are present and then set it to an identifier value for each stimulus synchronised to the start/stop of that stimulus. In that case you might set the Start data to be $ID (with ID being a column in your conditions file) and set the Stop Data to be 0.
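
In a Code Component or Coder script the same triggers can be sent with the psychopy.parallel module. A minimal sketch (the port address is a hypothetical example; check which address your own machine uses):

from psychopy import parallel

parallel.setPortAddress(0x0378)  # hypothetical address of the parallel port
parallel.setData(4)              # set the 8 data pins to the value 4 (binary 00000100)
# ...stimulus period...
parallel.setData(0)              # reset all the pins to 0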

Properties
Name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
Start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
Stop :
Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for details.
Port address : select the appropriate option
You need to know the address of the parallel port you wish to write to. The options that appear in this drop-down list are determined by the application preferences. You can add your particular port there if you prefer.
Start data : 0-255
When the start time/condition occurs this value will be sent to the parallel port. The value is given as a byte (a value from 0-255) controlling the 8 data pins of the parallel port.
Stop data : 0-255
As with start data but sent at the end of the period.
Sync to screen : boolean
If true then the parallel port will be sent synchronised to the next screen refresh, which is ideal if it should indicate the onset of a visual stimulus. If set to False then the data will be set on the parallel port immediately.

See also

API reference for iolab

Patch (image) Component

The Patch stimulus allows images to be presented in a variety of forms on the screen. The image can be a bitmap from a variety of standard file formats, or a synthetic repeating texture such as a sinusoidal grating. A transparency mask can also be used to control the shape of the image, and this can itself be derived from either a second image, or a mathematical form such as a Gaussian.

Patches can have their position, orientation, size and other settings manipulated on a frame-by-frame basis. There is a performance advantage (in terms of milliseconds) to using images which are square and powers of two (32, 64, 128, etc.), however this is slight and would not be noticed in the majority of experiments.

Parameters
name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop :
Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for details.
image : a filename, a standard name (‘sin’, ‘sqr’) or a numpy array of dimensions NxNx1 or NxNx3
This specifies the image that will be used as the texture for the visual patch. The image can be repeated on the patch (in either x or y or both) by setting the spatial frequency to be high (or can be stretched so that only a subset of the image appears by setting the spatial frequency to be low). Filenames can be relative or absolute paths and can refer to most image formats (e.g. tif, jpg, bmp, png, etc.). If this is set to none, the patch will be a flat colour.
mask : a filename, a standard name (‘gauss’, ‘circle’) or a numpy array of dimensions NxNx1
The mask can define the shape (e.g. circle will make the patch circular) or something which overlays the patch e.g. noise.
ori : degrees
The orientation of the entire patch (texture and mask) in degrees.
pos : [X,Y]
The position of the centre of the stimulus, in the units specified by the stimulus or window
size : [sizex, sizey] or a single value (applied to x and y)
The size of the stimulus in the given units of the stimulus/window. If the mask is a Gaussian then the size refers to width at 3 standard deviations on either side of the mean (i.e. sd=size/6)
units : deg, cm, pix, norm, or inherit from window
See Units for the window and stimuli
Advanced Settings
colour :
See Color spaces
colour space : rgb, dkl or lms
See Color spaces
SF : [SFx, SFy] or a single value (applied to x and y)
The spatial frequency of the texture on the patch. The units are dependent on the specified units for the stimulus/window; if the units are deg then the SF units will be cycles/deg, if units are norm then the SF units will be cycles per stimulus. If this is set to none then only one cycle will be displayed.
phase : single float or pair of values [X,Y]
The position of the texture within the mask, in both X and Y. If a single value is given it will be applied to both dimensions. The phase has units of cycles (rather than degrees or radians), wrapping at 1. As a result, setting the phase to 0, 1, 2... is equivalent, with the texture centered on the mask. A phase of 0.25 shifts the texture by a quarter of a cycle (equivalent to pi/2 radians). The advantage of this is that if you set the phase according to time it is automatically in Hz (see the drifting-grating sketch at the end of this section).
Texture Resolution : an integer (power of two)
Defines the resolution of the texture for standard textures such as sin, sqr etc. For most cases a value of 256 pixels will suffice, but if stimuli are going to be very small then a lower resolution will use less memory.
interpolate :
If linear is selected then linear interpolation will be applied when the image is rescaled to the appropriate size for the screen. Nearest will use a nearest-neighbour rule.

See also

API reference for PatchStim
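
As a rough illustration of the phase-in-cycles convention described above, the following Coder sketch (using a GratingStim, which is what this component produces) drifts a grating at 2 Hz simply by setting the phase from a clock; the particular values are arbitrary.

from psychopy import visual, core

win = visual.Window([400, 400])
grating = visual.GratingStim(win, tex='sin', mask='gauss', sf=5)

clock = core.Clock()
while clock.getTime() < 2.0:
    grating.setPhase(2.0 * clock.getTime())  # phase is in cycles, so this drifts at 2 Hz
    grating.draw()
    win.flip()
win.close()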

Polygon (shape) Component

(added in version 1.78.00)

The Polygon stimulus allows you to present a wide range of regular geometric shapes. The basic control comes from setting the number of vertices:
  • 2 vertices give a line
  • 3 give a triangle
  • 4 give a rectangle etc.
  • a large number will approximate a circle/ellipse

The size parameter takes two values. For a line only the first is used (then use ori to specify the orientation). For triangles and rectangles the size specifies the height and width as expected. Note that for pentagons upwards, however, the size determines the width/height of the ellipse on which the vertices fall, rather than the width/height of the shape itself (which will typically be slightly smaller).

Parameters
name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).

nVertices : integer

The number of vertices for your shape (2 gives a line, 3 gives a triangle,... a large number results in a circle/ellipse). It is not (currently) possible to vary the number of vertices dynamically.

fill settings:

Control the color inside the shape. If you set this to None then you will have a transparent shape (the line will remain)

line settings:

Control color and width of the line. The line width is always specified in pixels - it does not honour the units parameter.
size : [w,h]
See note above
start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop :
Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for details.
ori : degrees
The orientation of the shape in degrees.
pos : [X,Y]
The position of the centre of the stimulus, in the units specified by the stimulus or window
units : deg, cm, pix, norm, or inherit from window
See Units for the window and stimuli

See also

API reference for Polygon, API reference for Rect, or API reference for ShapeStim (for arbitrary vertices)

RatingScale Component

A rating scale is used to collect a numeric rating or a choice from a few alternatives, via the mouse, the keyboard, or both. Both the response and time taken to make it are returned.

A given routine might involve an image (patch component), along with a rating scale to collect the response. A routine from a personality questionnaire could have text plus a rating scale.

Three common usage styles are enabled on the first settings page:

‘visual analog scale’: the subject uses the mouse to position a marker on an unmarked line

‘category choices’: choose among verbal labels (categories, e.g., “True, False” or “Yes, No, Not sure”)

‘scale description’: used for numeric choices, e.g., 1 to 7 rating

Complete control over the display options is available as an advanced setting, ‘customize_everything’.

Properties
name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop :
The duration for which the stimulus is presented. See Defining the onset/duration of components for details.
visualAnalogScale : checkbox
If this is checked, a line with no tick marks will be presented using the ‘glow’ marker, and will return a rating from 0.00 to 1.00 (quasi-continuous). This is intended to bias people away from thinking in terms of numbers, and focus more on the visual bar when making their rating. This supersedes either choices or scaleDescription.
category choices : string
Instead of a numeric scale, you can present the subject with words or phrases to choose from. Enter all the words as a string. (Probably more than 6 or so will not look so great on the screen.) Spaces are assumed to separate the words. If there are any commas, the string will be interpreted as a list of words or phrases (possibly including spaces) that are separated by commas.
scaleDescription :
Brief instructions, reminding the subject how to interpret the numerical scale, default = “1 = not at all ... extremely = 7”
low : str
The lowest number (bottom end of the scale), default = 1. If it’s not an integer, it will be converted to lowAnchorText (see Advanced).
high : str
The highest number (top end of the scale), default = 7. If it’s not an integer, it will be converted to highAnchorText (see Advanced).
Advanced settings
single click :
If this box is checked the participant can only click the scale once and their response will be stored. If this box is not checked the participant must accept their rating before it is stored.
startTime : float or integer
The time (relative to the beginning of this Routine) that the rating scale should first appear.
forceEndTrial :
If checked, when the subject makes a rating the routine will be ended.
size : float
The size controls how big the scale will appear on the screen. (Same as “displaySizeFactor”.) Larger than 1 will be larger than the default, smaller than 1 will be smaller than the default.
pos : [X,Y]
The position of the centre of the stimulus, in the units specified by the stimulus or window. Default is centered left-right, and somewhat lower than the vertical center (0, -0.4).
duration :
The maximum duration in seconds for which the stimulus is presented. See duration for details. Typically, the subject’s response should end the trial, not a duration. A blank or negative value means wait for a very long time.
storeRatingTime:
Save the time from the beginning of the trial until the participant responds.
storeRating:
Save the rating that was selected
lowAnchorText : str
Custom text to display at the low end of the scale, e.g., “0%”; overrides ‘low’ setting
highAnchorText : str
Custom text to display at the high end of the scale, e.g., “100%”; overrides ‘high’ setting
customize_everything : str
If this is not blank, it will be used when initializing the rating scale just as it would be in a code component (see RatingScale). This allows access to all the customizable aspects of a rating scale, and supersedes all of the other RatingScale settings in the dialog panel. (This does not affect: startTime, forceEndTrial, duration, storeRatingTime, storeRating.)

See also

API reference for RatingScale
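
For comparison, a minimal Coder sketch of collecting a rating (the low/high values here are just examples) could be:

from psychopy import visual

win = visual.Window([800, 600])
ratingScale = visual.RatingScale(win, low=1, high=7)
while ratingScale.noResponse:  # loop until the participant accepts a rating
    ratingScale.draw()
    win.flip()
rating = ratingScale.getRating()    # the chosen value
decisionTime = ratingScale.getRT()  # time taken, in seconds
win.close()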

Sound Component
Parameters
name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start : float or integer
The time that the stimulus should first play. See Defining the onset/duration of components for details.
stop :
For sounds loaded from a file leave this blank and then give the Expected duration below for visualisation purposes. See Defining the onset/duration of components for details.
sound :

This sound can be described in a variety of ways:

  • a number can specify the frequency in Hz (e.g. 440)
  • a letter gives a note name (e.g. “C”) and sharp or flat can also be added (e.g. “Csh” “Bf”)
  • a filename, which can be a relative or absolute path (mid, wav, and ogg are supported).
volume : float or integer
The volume with which the sound should be played. It’s a normalized value between 0 (minimum) and 1 (maximum).

See also

API reference for SoundPyo
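
A minimal Coder sketch of the same idea (a 0.5 s, 440 Hz tone at 80% volume; the values are just examples):

from psychopy import sound, core

tone = sound.Sound(value=440, secs=0.5)  # could also be a note name ('A') or a .wav/.ogg filename
tone.setVolume(0.8)
tone.play()
core.wait(0.5)  # keep the script alive while the sound plays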

Static Component

(Added in Version 1.78.00)

The Static Component allows you to have a period during which you can preload images or perform other time-consuming operations that would not be possible while the screen is being updated.

Typically a static period would be something like an inter-trial or inter-stimulus interval (ITI/ISI). During this period you should not have any other objects being presented that are being updated (this isn’t checked for you - you have to make that check yourself), but you can have components being presented that are themselves static. For instance a fixation point never changes and so it can be presented during the static period (it will be presented and left on-screen while the other updates are being made).

Any stimulus updates can be made to occur during any static period defined in the experiment (it does not have to be in the same Routine). This is done in the updates selection box - once a static period exists it will show up here as well as the standard options of constant, every repeat etc. Many parameter updates (e.g. orientation) are made so quickly that using the static period is of no benefit, but others, most notably the loading of images from disk, can take substantial periods of time and these should always be performed during a static period to ensure good timing.

If the updates that have been requested were not completed by the end of the static period (i.e. there was a timing overshoot) then you will receive a warning to that effect. In this case you either need a longer static period to perform the actions or you need to reduce the time required for the action (e.g. use an image with fewer pixels).

Parameters
name :
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start :
The time that the static period begins. See Defining the onset/duration of components for details.
stop :
The time that the static period ends. See Defining the onset/duration of components for details.
custom code :
After running the component updates (which are defined in each component, not here) any code inserted here will also be run

See also

API reference for StaticPeriod
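
In Coder terms the same idea is available as core.StaticPeriod. The sketch below is illustrative only; the image filename is hypothetical and the 60 Hz refresh rate is an assumption.

from psychopy import visual, core

win = visual.Window([800, 600])
img = visual.ImageStim(win)
ISI = core.StaticPeriod(screenHz=60)
ISI.start(0.5)            # begin a 500 ms static period (e.g. an ITI)
img.setImage('face.jpg')  # hypothetical file; the slow disk load happens inside the period
ISI.complete()            # block until the 500 ms is up; warns if we overshot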

Text Component

This component can be used to present text to the participant, either instructions or stimuli.

name : string
Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
start :
The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop :
The duration for which the stimulus is presented. See Defining the onset/duration of components for details.
color :
See Color spaces
color space : rgb, dkl or lms
See Color spaces
ori : degrees
The orientation of the stimulus in degrees.
pos : [X,Y]
The position of the centre of the stimulus, in the units specified by the stimulus or window
height : integer or float
The height of the characters in the given units of the stimulus/window. Note that nearly all actual letters will occupy a smaller space than this, depending on font, character, presence of accents etc. The width of the letters is determined by the aspect ratio of the font.
units : deg, cm, pix, norm, or inherit from window
See Units for the window and stimuli
opacity :
Vary the transparency, from 0.0 = invisible to 1.0 = opaque
flip :
Whether to mirror-reverse the text: ‘horiz’ for left-right mirroring, ‘vert’ for up-down mirroring. The flip can be set dynamically on a per-frame basis by using a variable, e.g., $mirror, as defined in a code component or conditions file and set to either ‘horiz’ or ‘vert’.

See also

API reference for TextStim

Entering parameters

Most of the entry boxes for Component parameters simply receive text or numeric values or lists (sequences of values surrounded by square brackets) as input. In addition, the user can insert variables and code into most of these, which will be interpreted either at the beginning of the experiment or at regular intervals within it.

To indicate to PsychoPy that the value represents a variable or python code, rather than literal text, it should be preceded by a $. For example, inserting intensity into the text field of the Text Component will cause that word literally to be presented, whereas $intensity will cause python to search for the variable called intensity in the script.

Variables associated with Loops can also be entered in this way (see Accessing loop parameters from components for further details). But it can also be used to evaluate arbitrary python code.

For example:

  • $random(2)

    will generate a pair of random numbers

  • $”yn”[randint(2)]

    will randomly choose the first or second character (y or n)

  • $globalClock.getTime()

    will insert the current time in secs of the globalClock object

  • $[sin(angle), cos(angle)]

    will insert the sin and cos of an angle (e.g. into the x,y coords of a stimulus)

How often to evaluate the variable/code

If you do want the parameters of a stimulus to be evaluated by code in this way you need also to decide how often it should be updated. By default, the parameters of Components are set to be constant; the parameter will be set at the beginning of the experiment and will remain that way for the duration. Alternatively, they can be set to change either on every repeat in which case the parameter will be set at the beginning of the Routine on each repeat of it. Lastly many parameters can even be set on every frame, allowing them to change constantly on every refresh of the screen.

Experiment settings

The settings menu can be accessed by clicking the icon at the top of the window. It allows the user to set various aspects of the experiment, such as the size of the window to be used, what information is gathered about the subject, and what outputs (data files) will be generated.

Settings
Basic settings
Experiment name:
A name that will be stored in the metadata of the data file.
Show info dlg:
If this box is checked then a dialog will appear at the beginning of the experiment allowing the Experiment Info to be changed.
Experiment Info:
This information will be presented in a dialog box at the start and will be saved with any data files and so can be used for storing information about the current run of the study. The information stored here can also be used within the experiment. For example, if the Experiment Info included a field called ori then Builder Components could access expInfo[‘ori’] to retrieve the orientation set here. Obviously this is a useful way to run essentially the same experiment, but with different conditions set at run-time.
Enable escape:
If ticked then the Esc key can be used to exit the experiment at any time (even without a keyboard component)
Data settings
Data filename: (new in version 1.80.00):

A formatted string to control the base filename and path, often based on variables such as the date and/or the participant. This base filename will be given the various extensions for the different file types as needed. Examples:

# all in data folder: data/JWP_memoryTask_2014_Feb_15_1648
'data/%s_%s_%s' %(expInfo['participant'], expName, expInfo['date'])

# group by participant folder: data/JWP/memoryTask-2014_Feb_15_1648
'data/%s/%s-%s' %(expInfo['participant'], expName, expInfo['date'])

# put into dropbox: ~/dropbox/data/memoryTask/JWP-2014_Feb_15_1648
# on Windows you may need to replace ~ with your home directory
'~/dropbox/data/%s/%s-%s' %(expName, expInfo['participant'], expInfo['date'])
Save Excel file:
If this box is checked an Excel data file (.xlsx) will be stored.
Save csv file:
If this box is checked a comma separated variable (.csv) will be stored.
Save psydat file:
If this box is checked a PsychoPy data file (.psydat) will be stored. This is a Python-specific format (.pickle files) which contains more information than the .xlsx or .csv files and can be used with data analysis and plotting scripts written in Python. Whilst you may not wish to use this format, it is recommended that you always save a copy as it contains a complete record of the experiment at the time of data collection.
Save log file
A log file provides a record of what occurred during the experiment in chronological order, including information about any errors or warnings that may have occurred.
Logging level
How much detail do you want to be output to the log file, if it is being saved? The lowest level is error, which only outputs error messages; warning outputs warnings and errors; info outputs all info, warnings and errors; debug outputs everything that can be logged. This system enables the user to get a great deal of information while developing their experiment and then easily reduce it to just the critical information needed when actually running the study. If your experiment is not behaving as you expect, this is an excellent place to begin working out what the problem is.
Screen settings
Monitor
The name of the monitor calibration. Must match one of the monitor names from Monitor Center.
Screen:
If multiple screens are available (and if the graphics card is not an intel integrated graphics chip) then the user can choose which screen they use (e.g. 1 or 2).
Full-screen window:
If this box is checked then the experiment window will fill the screen (overriding the window size setting and using the size that the screen is currently set to in the operating system settings).
Window size:
The size of the window in pixels, if this is not to be a full-screen window.
Units
The default units of the window (see Units for the window and stimuli). These can be overridden by individual Components.

Defining the onset/duration of components

As of version 1.70.00, the onset and offset times of stimuli can be defined in several ways.

Start and stop times can be entered in terms of seconds (time (s)), by frame number (frameN) or in relation to another stimulus (condition). Condition would be used to make Components start or stop depending on the status of something else, for example when a sound has finished. Duration can also be varied using a Code Component.

If you need very precise timing (particularly for very brief stimuli for instance) then it is best to control your onset/duration by specifying the number of frames the stimulus will be presented for.

Measuring duration in seconds (or milliseconds) is not very precise because it doesn’t take into account the fact that your monitor has a fixed frame rate. For example if the screen has a refresh rate of 60Hz you cannot present your stimulus for 120ms; the frame rate would limit you to 116.7ms (7 frames) or 133.3ms (8 frames). The duration of a frame (in seconds) is simply 1/refresh rate in Hz.
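
For example, a quick way to work out the number of whole frames corresponding to a desired duration (the 60 Hz refresh rate here is an assumption; measure your own monitor):

refreshRate = 60.0                       # Hz; check your own monitor
frameDur = 1.0 / refreshRate             # ~0.0167 s per frame
nFrames = int(round(0.120 / frameDur))   # a 120 ms request becomes 7 frames (~116.7 ms)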


In cases where PsychoPy cannot determine the start/endpoint of your Component (e.g. because it is a variable) you can enter an ‘Expected’ start/duration. This simply allows components with variable durations to be drawn in the Routine window. If you do not enter the approximate duration it will not be drawn, but this will not affect experimental performance.

For more details of how to achieve good temporal precision see Timing Issues and synchronisation

Examples
  • Use time(s) or frameN and simply enter numeric values into the start and duration boxes.
  • Use time(s) or frameN and enter a numeric value into the start time and set the duration to a variable name by preceding it with a $ as described here. Then set the expected duration to see an approximation in your Routine.
  • Use condition to cause the stimulus to start immediately after a movie component called myMovie, by entering $myMovie.status==FINISHED into the start time.

Generating outputs (datafiles)

There are 4 main forms of output file from PsychoPy: Excel (.xlsx) files, comma-separated (.csv) files, binary data (.psydat) files and log files, as described in the Data settings above.

Common Mistakes (aka Gotchas)

General Advice
  • Python and therefore PsychoPy is CASE SENSITIVE
  • To use a dollar sign ($) for anything other than to indicate a code snippet (for example in a Text Component), precede it with a backslash, \$ (the backslash won’t be printed)
  • Have you entered the settings for your monitor? If you are using degrees as a unit of measurement and have not entered your monitor settings, the size of stimuli will not be accurate.
  • If your experiment is not behaving in the way that you expect, have you looked at the log file? This can point you in the right direction. Did you know you can change the type of information that is stored in the log file by changing the logging level in preferences?
  • Have you tried compiling the script and running it? Does this produce a particular error message that points you at a particular problem area? You can also change things in a more detailed way in the Coder view and, if you are having problems, reading through the script can highlight them. Reading a compiled script can also help with the creation of a Code Component
My stimulus isn’t appearing, there’s only the grey background
  • Have you checked the size of your stimulus? If it is 0.5x0.5 pixels you won’t be able to see it!
  • Have you checked the position of your stimulus? Is it positioned off the screen?
The loop isn’t using my Excel spreadsheet
  • Have you remembered to specify the file you want to use when setting up the loop?
  • Have you remembered to add the variables, preceded by the $ symbol, to your stimuli?
I just want a plain square, but it’s turning into a grating
  • If you don’t want your stimulus to have a texture, you need to set the Image parameter to None
The code snippet I’ve entered doesn’t do anything
  • Have you remembered to put a $ symbol at the beginning? (Note that this is not needed, and should be avoided, in a Code Component.)
  • A dollar sign as the first character of a line indicates to PsychoPy that the rest of the line is code. It does not indicate a variable name (unlike in perl or php). This means that if you are, for example, using variables to determine position, enter $[x,y]. The temptation is to use [$x,$y], which will not work.
My stimulus isn’t changing as I progress through the loop
  • Have you changed the setting for the variable that you want to change to ‘change every repeat’ (or ‘change every frame’)?
I’m getting the error message AttributeError: ‘unicode’ object has no attribute ‘XXXX’
  • This type of error is usually caused by a naming conflict. Whilst we have made every attempt to make sure that these conflicts produce a warning message it is possible that they may still occur.
  • The most common source of naming conflicts is an external file which has been imported to be used in a loop, i.e. .xlsx, .csv.
  • Check to make sure that all of the variable names are unique. There can be no repeated variable names anywhere in your experiment.
The window opens and immediately closes
  • Have you checked that all of your variable entries are accepted values, e.g. gauss but not Gauss?
  • If you compile your experiment and run it from the coder window what does the error message say? Does it point you towards a particular variable which may be incorrectly formatted?

If you are having problems getting the application to run please see Troubleshooting

Compiling a Script

If you click the compile script icon this will display the script for your experiment in the Coder window.

This can be used for debugging experiments, entering small amounts of code and learning a bit about writing scripts amongst other things.

The code is fully commented and so this can be an excellent introduction to writing your own code.

Set up your monitor properly

It’s a really good idea to tell PsychoPy about the set up of your monitor, especially the size in cm and pixels and its distance, so that PsychoPy can present your stimuli in units that will be consistent in another lab with a different set up (e.g. cm or degrees of visual angle).

You should do this in Monitor Center which can be opened from Builder by clicking on the icon that shows two monitors. In Monitor Center you can create settings for multiple configurations, e.g. different viewing distances or different physical devices and then select the appropriate one by name in your experiments or scripts.

Having set up your monitor settings you should then tell PsychoPy which of your monitor setups to use for this experiment by going to the Experiment settings dialog.

Future developments

The builder view still has a few rough edges, but is hopefully fairly usable. Here are some of the ways I hope it will improve:

  • More components. Several of the stimuli and events that PsychoPy can handle don’t currently show up as components in the builder view, but they can be added easily (take a look inside the components directory to see how easy it is to create a component).
  • Dialogue entry validation. Dialogue boxes currently allow you to type almost anything into their windows. The only current checking is that a name is given to the component and that this is unique. More checking is needed to reduce errors.
  • Similar to the above, I hope to add suggested entries to dialogs, as a form of help. For example, on right-clicking an entry box, say for stimulus orientation, a context menu could appear with ideas including numeric values, known local variables (e.g. “thisTrial.rgb”, based on the existing loops in the Flow) and global variable ideas (e.g. “frameN*360”)
  • Better code output. I hope that the builder output code will illustrate best practice for precise timing and stimulus presentation (it will probably always take more lines than a hand-written script, but it should be at least as precise). At the moment that isn’t the case, e.g. the builder should strongly recommend an interval between trials where only static stimuli are drawn (e.g. fixation) and update components for the upcoming trial in that interval.

Coder

Note

These do not teach you about Python per se, and you are recommended also to learn about that (Python has many excellent tutorials for programmers and non-programmers alike). In particular, dictionaries, lists and numpy arrays are used a great deal in most PsychoPy experiments.

You can learn to use the scripting interface to PsychoPy in several ways, and you should probably follow a combination of them:

  • Basic Concepts: some of the logic of PsychoPy scripting
  • PsychoPy Tutorials: walk you through the development of some semi-complete experiments
  • demos: in the demos menu of Coder view. Many and varied
  • use the Builder to compile a script and see how it works
  • check the Reference Manual (API) for further details
  • ultimately go into PsychoPy and start examining the source code. It’s just regular python!

Basic Concepts

Presenting Stimuli

Note

Before you start, tell PsychoPy about your monitor(s) using the Monitor Center. That way you get to use units (like degrees of visual angle) that will transfer easily to other computers.

Stimulus objects

Python is an ‘object-oriented’ programming language, meaning that most stimuli in PsychoPy are represented by python objects, with various associated methods and information.

Typically you should create your stimulus once, at the beginning of the script, and then change it as you need to later using set____() commands. For instance, create your text and then change its color any time you like:

from psychopy import visual, core
win = visual.Window([400,400])
message = visual.TextStim(win, text='hello')
message.setAutoDraw(True)  # automatically draw every frame
win.flip()
core.wait(2.0)
message.setText('world')  # change properties of existing stim
win.flip()
core.wait(2.0)
Setting stimulus attributes
Stimulus attributes are typically set using either
  • a string, which is just some characters (as message.setText(‘world’) above)
  • a scalar (a number; see below)
  • an x,y-pair (two numbers; see below)
x,y-pair:

PsychoPy is very flexible in terms of input. You can specify the widely used x,y-pairs using these types:

  • A Tuple (x, y) with two elements
  • A List [x, y] with two elements
  • A numpy array([x, y]) with two elements

However, PsychoPy always converts the x,y-pairs to numpy arrays internally. For example, all three assignments of pos are equivalent here:

stim.pos = (0.5, -0.2)  # Right and a bit up from the center
print stim.pos  # array([0.5, -0.2])

stim.pos = [0.5, -0.2]
print stim.pos  # array([0.5, -0.2])

stim.pos = numpy.array([0.5, -0.2])
print stim.pos  # array([0.5, -0.2])

Choose your favorite :-) However, you can’t assign elementwise:

stim.pos[1] = 4  # has no effect
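
If you do want to change just one element, a simple workaround (re-using the stim object from the examples above) is to assign a complete new pair:

x, y = stim.pos    # unpack the current position
stim.pos = (x, 4)  # reassign the whole pair; this does take effect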
Scalar:

Int or Float.

Mostly, scalars are no-brainers to understand. E.g.:

stim.ori = 90  # Rotate stimulus 90 degrees
stim.opacity = 0.8  # Make the stimulus slightly transparent.

However, scalars can also be used to assign x,y-pairs. In that case, both x and y get the value of the scalar. E.g.:

stim.size = 0.5
print stim.size  # array([0.5, 0.5])
Operations on attributes:

Operations during assignment of attributes are a handy way to smoothly alter the appearance of your stimuli in loops.

Most scalars and x,y-pairs support the basic operations:

stim.attribute += value  # addition
stim.attribute -= value  # subtraction
stim.attribute *= value  # multiplication
stim.attribute /= value  # division
stim.attribute %= value  # modulus
stim.attribute **= value # power

They are easy to use and understand on scalars:

stim.ori = 5     # 5.0, set rotation
stim.ori += 3.8  # 8.8, rotate clockwise
stim.ori -= 0.8  # 8.0, rotate counterclockwise
stim.ori /= 2    # 4.0, home in on zero
stim.ori **= 3   # 64.0, exponential increase in rotation
stim.ori %= 10   # 4.0, modulus 10

However, they can also be used on x,y-pairs in very flexible ways. Here you can use both scalars and x,y-pairs as operators. In the latter case, the operations are element-wise:

stim.size = 5           # array([5.0, 5.0]), set quadratic size
stim.size += 2          # array([7.0, 7.0]), increase size
stim.size /= 2          # array([3.5, 3.5]), downscale size
stim.size += (0.5, 2.5) # array([4.0, 6.0]), a little wider and much taller
stim.size *= (2, 0.25)  # array([8.0, 1.5]), upscale horizontal and downscale vertical

Operations are not meaningful for strings.

Timing
There are various ways to measure and control timing in PsychoPy:
  • using frame refresh periods (most accurate, least obvious)
  • checking the time on Clock objects
  • using core.wait() commands (most obvious, least flexible/accurate)

Using core.wait(), as in the above example, is clear and intuitive in your script. But it can’t be used while something is changing. For more flexible timing, you could use a Clock() object from the core module:

from psychopy import visual, core

#setup stimulus
win=visual.Window([400,400])
gabor = visual.GratingStim(win, tex='sin', mask='gauss', sf=5, name='gabor')
gabor.setAutoDraw(True)  # automatically draw every frame
gabor.autoLog = False  # or we'll get many messages about phase change

clock = core.Clock()
#let's draw a stimulus for 2s, drifting for middle 0.5s
while clock.getTime() < 2.0:  # clock times are in seconds
    if 0.5 <= clock.getTime() < 1.0:
        gabor.setPhase(0.1, '+')  # increment by 10th of cycle
    win.flip()

Clocks are accurate to around 1ms (better on some platforms), but using them to time stimuli is not very accurate because it fails to account for the fact that your monitor updates at a fixed frame rate. In the example above, the stimulus does not actually get drawn for exactly 0.5s (500ms). If the screen is refreshing at 60Hz (16.7ms per frame) and the getTime() call reports that the time has reached 1.999s, then the stimulus will draw for one more frame, in accordance with the while loop statement, and will ultimately be displayed for 2.0167s. Alternatively, if the time has reached 2.001s, there will not be an extra frame drawn. So using this method you get timing accurate to the nearest frame period but with little consistent precision. An error of 16.7ms might be acceptable for long-duration stimuli, but not for a brief presentation. It might also give the false impression that a stimulus can be presented for any given period. At 60Hz refresh you cannot present your stimulus for, say, 120ms; the frame period would limit you to 116.7ms (7 frames) or 133.3ms (8 frames).

As a result, the most precise way to control stimulus timing is to present them for a specified number of frames. The frame rate is extremely precise, much better than ms-precision. Calls to Window.flip() will be synchronised to the frame refresh; the script will not continue until the flip has occurred. As a result, on most cards, as long as frames are not being ‘dropped’ (see Detecting dropped frames) you can present stimuli for a fixed, reproducible period.

Note

Some graphics cards, such as Intel GMA graphics chips under win32, don’t support frame sync. Avoid integrated graphics for experiment computers wherever possible.

Using the concept of fixed frame periods and flip() calls that sync to those periods we can time stimulus presentation extremely precisely with the following:

from psychopy import visual, core

#setup stimulus
win=visual.Window([400,400])
gabor = visual.GratingStim(win, tex='sin', mask='gauss', sf=5,
    name='gabor', autoLog=False)
fixation = visual.GratingStim(win, tex=None, mask='gauss', sf=0, size=0.02,
    name='fixation', autoLog=False)

clock = core.Clock()
#let's draw a stimulus for 2s, drifting for middle 0.5s
for frameN in range(200):#for exactly 200 frames
    if 10 <= frameN < 150:  # present fixation for a subset of frames
        fixation.draw()
    if 50 <= frameN < 100:  # present stim for a different subset
        gabor.setPhase(0.1, '+')  # increment by 10th of cycle
        gabor.draw()
    win.flip()
Using autoDraw

Stimuli are typically drawn manually on every frame in which they are needed, using the draw() function. You can also set any stimulus to start and stop drawing on every frame using setAutoDraw(True) or setAutoDraw(False). If you use these commands on stimuli that also have autoLog=True, then they will generate a log message on the frame when drawing first occurs and on the first frame after it has ended.
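
For instance, re-using the fixation and win objects from the timing example above (a sketch):

fixation.setAutoDraw(True)   # from now on, drawn automatically on every flip
for frameN in range(60):     # roughly 1 s at 60 Hz, without calling fixation.draw()
    win.flip()
fixation.setAutoDraw(False)  # no longer drawn from the next flip onwards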

Logging data

TrialHandler and StairHandler can both generate data outputs in which responses are stored, in relation to the stimulus conditions. In addition to those data outputs, PsychoPy can create detailed chronological log files of events during the experiment.

Log levels and targets
Log messages have various levels of severity:
ERROR, WARNING, DATA, EXP, INFO and DEBUG

Multiple targets can also be created to receive log messages. Each target has a particular critical level and receives all logged messages at or above that level. For example, you could set the console (visual output) to receive only warnings and errors, have a central log file that you use to store warning messages across studies (with file mode append), and another to create a detailed log of data and events within a single study with level=INFO:

from psychopy import logging
logging.console.setLevel(logging.WARNING)
#overwrite (mode='w') a detailed log of the last run in this dir
lastLog=logging.LogFile("lastRun.log", level=logging.INFO, mode='w')
#also append warnings to a central log file
centralLog=logging.LogFile("c:/psychopyExps.log", level=logging.WARNING, mode='a')
Updating the logs

For performance purposes log files are not actually written when the log commands are ‘sent’. They are stored in a list and processed automatically when the script ends. You might also choose to force a flush of the logged messages manually during the experiment (e.g. during an inter-trial interval):

from psychopy import logging

...

logging.flush()#write messages out to all targets

This should only be necessary if you want to see the logged information as the experiment progresses.

AutoLogging

New in version 1.63.00

Certain events will log themselves automatically by default. For instance, visual stimuli send log messages every time one of their parameters is changed, and when autoDraw is toggled they send a message that the stimulus has started/stopped. All such log messages are timestamped with the frame flip on which they take effect. To avoid this logging, for stimuli such as fixation points that might not be critical to your analyses, or for stimuli that change constantly and will flood the logging system with messages, the autoLogging can be turned on/off at initialisation of the stimulus and can be altered afterwards with .setAutoLog(True/False)
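
For example (a sketch re-using the win and gabor objects from the timing examples above):

fixation = visual.GratingStim(win, tex=None, mask='circle', size=0.2,
    autoLog=False)         # never log this stimulus's parameter changes
gabor.setAutoLog(False)    # or switch logging off for an existing stimulus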

Manual methods

In addition to a variety of automatic logging messages, you can create your own, of various levels. These can be timestamped immediately:

from psychopy import logging
logging.log(level=logging.WARN, msg='something important')
logging.log(level=logging.EXP, msg='something about the conditions')
logging.log(level=logging.DATA, msg='something about a response')
logging.log(level=logging.INFO, msg='something less important')

There are additional convenience functions for the above: logging.warn(‘a warning’) etc.

For stimulus changes you probably want the log message to be timestamped based on the frame flip (when the stimulus is next presented) rather than the time that the log message is sent:

from psychopy import logging, visual
win = visual.Window([400,400])
win.flip()
logging.log(level=logging.EXP, msg='sent immediately')
win.logOnFlip(level=logging.EXP, msg='sent on actual flip')
win.flip()
Using a custom clock for logs

New in version 1.63.00

By default times for log files are reported as seconds after the very beginning of the script (often it takes a few seconds to initialise and import all modules too). You can set the logging system to use any given core.Clock object (actually, anything with a getTime() method):

from psychopy import core, logging
globalClock=core.Clock()
logging.setDefaultClock(globalClock)
Handling Trials and Conditions
TrialHandler

This is what underlies the random and sequential loop types in Builder; they work using the method of constants. The TrialHandler presents a predetermined list of conditions in either a sequential or random (without replacement) order.

see TrialHandler for more details.
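
A minimal sketch of using it directly in a script (the condition values are hypothetical):

from psychopy import data

conditions = [{'ori': 0}, {'ori': 45}, {'ori': 90}]  # hypothetical list of condition dicts
trials = data.TrialHandler(conditions, nReps=2, method='random')
for thisTrial in trials:
    # ...present a stimulus at thisTrial['ori'] and collect a response here...
    trials.addData('resp', 1)  # store whatever was measured for this trial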

StairHandler

This generates the next trial using an adaptive staircase. The conditions are not predetermined and are generated based on the participant’s responses.

Staircases are predominantly used in psychophysics to measure discrimination and detection thresholds. However, they can be used in any experiment which varies a numeric value as a result of a 2 alternative forced choice (2AFC) response.

The StairHandler systematically generates numbers based on staircase parameters. These can then be used to define a stimulus parameter e.g. spatial frequency, stimulus presentation duration. If the participant gives the incorrect response the number generated will get larger and if the participant gives the correct response the number will get smaller.

see StairHandler for more details

PsychoPy Tutorials

Tutorial 1: Generating your first stimulus

A tutorial to get you going with your first stimulus display.

Know your monitor

PsychoPy has been designed to handle your screen calibrations for you. It is also designed to operate (if possible) in the final experimental units that you like to use e.g. degrees of visual angle.

In order to do this PsychoPy needs to know a little about your monitor. There is a GUI to help with this (select MonitorCenter from the tools menu of PsychoPyIDE or run ...site-packages/monitors/MonitorCenter.py).

In the MonitorCenter window you can create a new monitor name, insert values that describe your monitor and run calibrations like gamma corrections. For now you can just stick to the [testMonitor] but give it correct values for your screen size in number of pixels and width in cm.

Now, when you create a window on your monitor you can give it the name ‘testMonitor’ and stimuli will know how they should be scaled appropriately.

Your first stimulus

Building stimuli is extremely easy. All you need to do is create a Window, then some stimuli. Draw those stimuli, then update the window. PsychoPy has various other useful commands to help with timing too. Here’s an example. Type it into a coder window, save it somewhere and press run.

from psychopy import visual, core  # import some libraries from PsychoPy

#create a window
mywin = visual.Window([800,600], monitor="testMonitor", units="deg")

#create some stimuli
grating = visual.GratingStim(win=mywin, mask="circle", size=3, pos=[-4,0], sf=3)
fixation = visual.GratingStim(win=mywin, size=0.5, pos=[0,0], sf=0, rgb=-1)

#draw the stimuli and update the window
grating.draw()
fixation.draw()
mywin.update()

#pause, so you get a chance to see it!
core.wait(5.0)

Note

For those new to Python: did you notice that the grating and the fixation stimuli both call GratingStim but with different arguments? One of the nice features of python is that you can choose which arguments to set. GratingStim has over 15 arguments that can be set, but the others just take on default values if they aren't needed.

That’s a bit easy though. Let’s make the stimulus move, at least! To do that we need to create a loop where we change the phase (or orientation, or position...) of the stimulus and then redraw. Add this code in place of the drawing code above:

for frameN in range(200):
    grating.setPhase(0.05, '+')  # advance phase by 0.05 of a cycle
    grating.draw()
    fixation.draw()
    mywin.update()

That ran for 200 frames (and then waited 5 seconds as well). Maybe it would be nicer to keep updating until the user hits a key instead. That’s easy to add too. In the first line add event to the list of modules you’ll import. Then replace the line:

for frameN in range(200):

with the line:

while True: #this creates a never-ending loop

Then, within the loop (make sure it has the same indentation as the other lines) add the lines:

    if len(event.getKeys())>0: break
    event.clearEvents()

The first line counts how many keys have been pressed since the last frame. If more than zero are found then we break out of the never-ending loop. The second line clears the event buffer and should always be called after you've collected the events you want (otherwise it gets full of events that we don't care about, like the mouse moving around etc.).

Your finished script should look something like this:

from psychopy import visual, core, event #import some libraries from PsychoPy

#create a window
mywin = visual.Window([800,600],monitor="testMonitor", units="deg")

#create some stimuli
grating = visual.GratingStim(win=mywin, mask='circle', size=3, pos=[-4,0], sf=3)
fixation = visual.GratingStim(win=mywin, size=0.2, pos=[0,0], sf=0, rgb=-1)

#draw the stimuli and update the window
while True: #this creates a never-ending loop
    grating.setPhase(0.05, '+')#advance phase by 0.05 of a cycle
    grating.draw()
    fixation.draw()
    mywin.flip()

    if len(event.getKeys())>0: break
    event.clearEvents()

#cleanup
mywin.close()
core.quit()

There are several more simple scripts like this in the demos menu of the Coder and Builder views and many more to download. If you’re feeling like something bigger then go to Tutorial 2: Measuring a JND using a staircase procedure which will show you how to build an actual experiment.

Tutorial 2: Measuring a JND using a staircase procedure

This tutorial builds an experiment to test your just-noticeable-difference (JND) to orientation, that is, it determines the smallest angular deviation needed for you to detect that a gabor stimulus isn't vertical (or at some other reference orientation). The method presents a pair of stimuli at once, with the observer having to report with a key press whether the left or the right stimulus was at the reference orientation (e.g. vertical).

You can download the full code here. Note that the entire experiment is constructed of less than 100 lines of code, including the initial presentation of a dialogue for parameters, generation and presentation of stimuli, running the trials, saving data and outputting a simple summary analysis for feedback. Not bad, eh?

There are a great many modifications that can be made to this code, but this example is designed to demonstrate how much can be achieved with very simple code. Modifying existing code is an excellent way to begin writing your own scripts, for example you may want to try changing the appearance of the text or the stimuli.

Get info from the user

The first lines of code import the necessary libraries. We need lots of the psychopy components for a full experiment, as well as python's time library (to get the current date), numpy (which handles various numerical/mathematical functions) and the built-in random module (used below to pick which side the target appears on):

from psychopy import core, visual, gui, data, event
from psychopy.tools.filetools import fromFile, toFile
import time, numpy, random

The try:...except:... lines allow us to try to load a parameter file from a previous run of the experiment. If that fails (e.g. because the experiment has never been run) then we create a default set of parameters. These are easy to store in a python dictionary that we'll call expInfo:

try:#try to get a previous parameters file
    expInfo = fromFile('lastParams.pickle')
except:#if not there then use a default set
    expInfo = {'observer':'jwp', 'refOrientation':0}
expInfo['dateStr'] = time.strftime("%b_%d_%H%M", time.localtime())#add the current time

The last line adds the current date to whichever method was used.

So having loaded those parameters, let's allow the user to change them in a dialogue box (which we'll call dlg). This is the simplest form of dialogue, created directly from the dictionary above. The dialogue will be presented immediately to the user and the script will wait until they hit OK or Cancel.

If they hit OK then dlg.OK=True, in which case we’ll use the updated values and save them straight to a parameters file (the one we try to load above).

If they hit Cancel then we’ll simply quit the script and not save the values.

#present a dialogue to change params
dlg = gui.DlgFromDict(expInfo, title='simple JND Exp', fixed=['dateStr'])
if dlg.OK:
    toFile('lastParams.pickle', expInfo)#save params to file for next time
else:
    core.quit()#the user hit cancel so exit
Setup the information for trials

We'll create a file to which we can output some data as text during each trial (as well as outputting a binary file at the end of the experiment). We'll create a filename from the subject+date+".csv" (note how easy it is to concatenate strings in python just by 'adding' them). CSV files can be opened in most spreadsheet packages. Having opened a text file for writing, the last line shows how easy it is to send text to this target document.

#make a text file to save data
fileName = expInfo['observer'] + expInfo['dateStr']
dataFile = open(fileName+'.csv', 'w')#a simple text file with 'comma-separated-values'
dataFile.write('targetSide,oriIncrement,correct\n')#a header line naming the columns

PsychoPy allows us to set up an object to handle the presentation of stimuli in a staircase procedure, the StairHandler. This will define the increment of the orientation (i.e. how far it is from the reference orientation). The staircase can be configured in many ways, but we’ll set it up to begin with an increment of 20deg (very detectable) and home in on the 80% threshold value. We’ll step up our increment every time the subject gets a wrong answer and step down if they get three right answers in a row. The step size will also decrease after every 2 reversals, starting with an 8dB step (large) and going down to 1dB steps (smallish). We’ll finish after 50 trials.

#create the staircase handler
staircase = data.StairHandler(startVal = 20.0,
                          stepType = 'db', stepSizes=[8,4,4,2,2,1,1],
                          nUp=1, nDown=3,  #will home in on the 80% threshold
                          nTrials=50)
Build your stimuli

Now we need to create a window, some stimuli and timers. We need a Window in which to draw our stimuli, a fixation point and two GratingStim stimuli (one for the target probe and one as the foil). We can have as many timers as we like and reset them at any time during the experiment, but I generally use one to measure the time since the experiment started and another that I reset at the beginning of each trial.

#create window and stimuli
win = visual.Window([800,600],allowGUI=True, monitor='testMonitor', units='deg')
foil = visual.GratingStim(win, sf=1, size=4, mask='gauss', ori=expInfo['refOrientation'])
target = visual.GratingStim(win, sf=1, size=4, mask='gauss', ori=expInfo['refOrientation'])
fixation = visual.GratingStim(win, color=-1, colorSpace='rgb', tex=None, mask='circle',size=0.2)
#and some handy clocks to keep track of time
globalClock = core.Clock()
trialClock = core.Clock()

Once the stimuli are created we should give the subject a message asking if they're ready. The next two lines create a pair of messages, then draw them to the screen and update the screen to show what we've drawn. Finally we issue the command event.waitKeys() which will wait for a keypress before continuing.

#display instructions and wait
message1 = visual.TextStim(win, pos=[0,+3],text='Hit a key when ready.')
message2 = visual.TextStim(win, pos=[0,-3],
    text="Then press left or right to identify the %.1f deg probe." %expInfo['refOrientation'])
message1.draw()
message2.draw()
fixation.draw()
win.flip()#to show our newly drawn 'stimuli'
#pause until there's a keypress
event.waitKeys()
Control the presentation of the stimuli

OK, so we have everything that we need to run the experiment. The following uses a for-loop that will iterate over trials in the experiment. With each pass through the loop the staircase object will provide the new value for the intensity (which we will call thisIncrement). We then randomly choose a side on which to present the target stimulus (using random.choice([-1, 1])), setting the position of the target to be there and the foil to be on the other side of the fixation point.

for thisIncrement in staircase: #will step through the staircase
    #set location of stimuli
    targetSide= random.choice([-1,1]) #will be either +1(right) or -1(left)
    foil.setPos([-5*targetSide, 0])
    target.setPos([5*targetSide, 0])#and the target on the other side

Then set the orientation of the foil to be the reference orientation plus thisIncrement, draw all the stimuli (including the fixation point) and update the window.

    #set orientation of probe
    foil.setOri(expInfo['refOrientation'] + thisIncrement)

    #draw all stimuli
    foil.draw()
    target.draw()
    fixation.draw()
    win.flip()

Wait for presentation time of 500ms and then blank the screen (by updating the screen after drawing just the fixation point).

    core.wait(0.5) #wait 500ms; but use a loop of x frames for more accurate timing in fullscreen
                              # eg, to get 30 frames: for f in xrange(30): win.flip()
    #blank screen
    fixation.draw()
    win.flip()
Get input from the subject

Still within the for-loop (note the level of indentation is the same) we need to get the response from the subject. The method works by starting off assuming that there hasn’t yet been a response and then waiting for a key press. For each key pressed we check if the answer was correct or incorrect and assign the response appropriately, which ends the trial. We always have to clear the event buffer if we’re checking for key presses like this

    #get response
    thisResp=None
    while thisResp==None:
        allKeys=event.waitKeys()
        for thisKey in allKeys:
            if thisKey=='left':
                if targetSide==-1: thisResp = 1#correct
                else: thisResp = -1             #incorrect
            elif thisKey=='right':
                if targetSide== 1: thisResp = 1#correct
                else: thisResp = -1             #incorrect
            elif thisKey in ['q', 'escape']:
                core.quit() #abort experiment
        event.clearEvents() #clear other (eg mouse) events - they clog the buffer

Now we must tell the staircase the result of this trial with its addData() method. Then it can work out whether the next trial is an increment or decrement. Also, on each trial (so still within the for-loop) we may as well save the data as a line of text in that .csv file we created earlier.

    #add the data to the staircase so it can calculate the next level
    staircase.addData(thisResp)
    dataFile.write('%i,%.3f,%i\n' %(targetSide, thisIncrement, thisResp))
Output your data and clean up

OK! We’re basically done! We’ve reached the end of the for-loop (which occurred because the staircase terminated) which means the trials are over. The next step is to close the text data file and also save the staircase as a binary file (by ‘pickling’ the file in Python speak) which maintains a lot more info than we were saving in the text file.

#staircase has ended
dataFile.close()

While we're here, it's quite nice to give some immediate feedback to the user. Let's tell them the intensity values at all the reversals and give them the mean of the last 6. This is an easy way to get an estimate of the threshold, but we might be able to do a better job by trying to reconstruct the psychometric function. To give that a try see the staircase analysis script of Tutorial 3.

Having saved the data you can give your participant some feedback and quit!

staircase.saveAsPickle(fileName) #special python binary file to save all the info

#give some output to user in the command line in the output window
print 'reversals:'
print staircase.reversalIntensities
print 'mean of final 6 reversals = %.3f' %(numpy.average(staircase.reversalIntensities[-6:]))

#give some on screen feedback
feedback1 = visual.TextStim(win, pos=[0,+3],
    text='mean of final 6 reversals = %.3f' %
(numpy.average(staircase.reversalIntensities[-6:])))
feedback1.draw()
fixation.draw()
win.flip()
event.waitKeys() #wait for participant to respond

win.close()
core.quit()

Tutorial 3: Analysing data in Python

You could simply output your data as tab- or comma-separated text files and analyse the data in some spreadsheet package. But the matplotlib library in Python also allows for very neat and simple creation of publication-quality plots.

This script shows you how to use a couple of functions from PsychoPy to open some data files (psychopy.gui.fileOpenDlg()) and create a psychometric function out of some staircase data (psychopy.data.functionFromStaircase()).

Matplotlib is then used to plot the data.

Note

Matplotlib and pylab. Matplotlib is a python library that has similar command syntax to most of the plotting functions in Matlab(tm). It can be imported in different ways; the import pylab line at the beginning of the script is one way to import matplotlib as well as a variety of other scientific tools (that aren’t strictly to do with plotting per se).

#This analysis script takes one or more staircase datafiles as input
#from a GUI. It then plots the staircases on top of each other on 
#the left and a combined psychometric function from the same data
#on the right

from psychopy import data, gui, core
from psychopy.tools.filetools import fromFile
import pylab

#Open a dialog box to select files from
files = gui.fileOpenDlg('.')
if not files:
    core.quit()

#get the data from all the files
allIntensities, allResponses = [],[]
for thisFileName in files:
    thisDat = fromFile(thisFileName)
    allIntensities.append( thisDat.intensities )
    allResponses.append( thisDat.data )
    
#plot each staircase
pylab.subplot(121)
colors = 'brgkcmbrgkcm'
lines, names = [],[]
for fileN, thisStair in enumerate(allIntensities):
    #lines.extend(pylab.plot(thisStair))
    #names = files[fileN]
    pylab.plot(thisStair, label=files[fileN])
#pylab.legend()

#get combined data
combinedInten, combinedResp, combinedN = \
             data.functionFromStaircase(allIntensities, allResponses, 5)
#fit curve - in this case using a Weibull function
fit = data.FitWeibull(combinedInten, combinedResp, guess=[0.2, 0.5])
smoothInt = pylab.arange(min(combinedInten), max(combinedInten), 0.001)
smoothResp = fit.eval(smoothInt)
thresh = fit.inverse(0.8)
print thresh

#plot curve
pylab.subplot(122)
pylab.plot(smoothInt, smoothResp, '-')
pylab.plot([thresh, thresh],[0,0.8],'--')
pylab.plot([0, thresh],[0.8,0.8],'--')
pylab.title('threshold = %0.3f' %(thresh))
#plot points
pylab.plot(combinedInten, combinedResp, 'o')
pylab.ylim([0,1])

pylab.show()

Reference Manual (API)

Contents:

psychopy.core - basic functions (clocks etc.)

Basic functions, including timing, rush (imported), quit

psychopy.core.getTime()

Get the current time since psychopy.core was loaded.

Version Notes: Note that prior to PsychoPy 1.77.00 the behaviour of getTime() was platform dependent (on OSX and linux it was equivalent to psychopy.core.getAbsTime() whereas on windows it returned time since loading of the module, as now)

psychopy.core.getAbsTime()

Return unix time (i.e., whole seconds elapsed since Jan 1, 1970).

This uses the same clock-base as the other timing features, like getTime(). The time (in seconds) ignores the time-zone (like time.time() on linux). To take the timezone into account, use int(time.mktime(time.gmtime())).

Absolute times in seconds are especially useful to add to generated file names for being unique, informative (= a meaningful time stamp), and because the resulting files will always sort as expected when sorted in chronological, alphabetical, or numerical order, regardless of locale and so on.
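
For example, a minimal sketch of building such a time-stamped data file name (the subject prefix here is arbitrary):

from psychopy import core

# whole seconds since 1970: unique and sorts chronologically
fileName = 'subj01_%d.csv' % core.getAbsTime()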

Version Notes: This method was added in PsychoPy 1.77.00

psychopy.core.wait(secs, hogCPUperiod=0.2)

Wait for a given time period.

If secs=10 and hogCPUperiod=0.2 then for the first 9.8 s python’s time.sleep function will be used, which is not especially precise, but allows the CPU to perform housekeeping. For the final hogCPUperiod the more precise method of constantly polling the clock is used for greater precision.

If you want to obtain key-presses during the wait, be sure to use pyglet and to hog the CPU for the entire time (e.g. set hogCPUperiod=secs), and then call psychopy.event.getKeys() after calling wait()

If you want to suppress checking for pyglet events during the wait, do this once:

core.checkPygletDuringWait = False

and from then on you can do:

core.wait(sec)

This will preserve terminal-window focus during command line usage.
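
For example, a minimal sketch following the advice above (collecting key presses across a 2 s wait):

from psychopy import core, event

event.clearEvents()               # discard any earlier key presses
core.wait(2.0, hogCPUperiod=2.0)  # hog the CPU for the whole period so key presses are captured
keys = event.getKeys()            # collect any keys pressed during the wait
print keys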

class psychopy.core.Clock

A convenient class to keep track of time in your experiments. You can have as many independent clocks as you like (e.g. one to time responses, one to keep track of stimuli...)

This clock is identical to the MonotonicClock except that it can also be reset to 0 or another value at any point.

add(t)

Add more time to the clock’s ‘start’ time (t0).

Note that, by adding time to t0, you make the current time appear less. Can have the effect that getTime() returns a negative number that will gradually count back up to zero.

e.g.:

timer = core.Clock()
timer.add(5)
while timer.getTime()<0:
    pass  # do something

reset(newT=0.0)

Reset the time on the clock. With no args time will be set to zero. If a float is received this will be the new time on the clock

class psychopy.core.CountdownTimer(start=0)

Similar to a Clock except that time counts down from the time of last reset

Typical usage:

timer = core.CountdownTimer(5)
while timer.getTime() > 0:  # after 5s will become negative
    pass  # do stuff

getTime()

Returns the current time left on this timer in secs (sub-ms precision)

class psychopy.core.MonotonicClock(start_time=None)

A convenient class to keep track of time in your experiments using a sub-millisecond timer.

Unlike the Clock this cannot be reset to arbitrary times. For this clock t=0 always represents the time that the clock was created.

Don’t confuse this class with core.monotonicClock which is an instance of it that got created when PsychoPy.core was imported. That clock instance is deliberately designed always to return the time since the start of the study.

Version Notes: This class was added in PsychoPy 1.77.00

getLastResetTime()

Returns the current offset being applied to the high resolution timebase used by Clock.

getTime()

Returns the current time on this clock in secs (sub-ms precision)

class psychopy.core.StaticPeriod(screenHz=None, win=None, name='StaticPeriod')

A class to help insert a timing period that includes code to be run.

Typical usage:

fixation.draw()
win.flip()
ISI = StaticPeriod(screenHz=60)
ISI.start(0.5) #start a period of 0.5s
stim.image = 'largeFile.bmp' #could take some time
ISI.complete() #finish the 0.5s, taking into account one 60Hz frame

stim.draw()
win.flip() #the period takes into account the next frame flip
#time should now be at exactly 0.5s later than when ISI.start() was called
Parameters:
  • screenHz – the frame rate of the monitor (leave as None if you don’t want this accounted for)
  • win – if a visual.Window is given then StaticPeriod will also pause/restart frame interval recording
  • name – give this StaticPeriod a name for more informative logging messages
complete()

Completes the period, using up whatever time is remaining with a call to wait()

Returns: 1 for success, 0 for fail (the period overran)
start(duration)

Start the period. If this is called a second time, the timer will be reset and starts again

psychopy.visual - many visual stimuli

Window to display all stimuli below.

Aperture
BufferImageStim
Attributes
BufferImageStim
BufferImageStim.win
BufferImageStim.buffer
BufferImageStim.rect
BufferImageStim.stim
BufferImageStim.mask
BufferImageStim.units
BufferImageStim.sf
BufferImageStim.pos
BufferImageStim.ori
BufferImageStim.size
BufferImageStim.contrast
BufferImageStim.color
BufferImageStim.colorSpace
BufferImageStim.opacity
BufferImageStim.interpolate
BufferImageStim.name
BufferImageStim.autoLog
BufferImageStim.draw
BufferImageStim.autoDraw
Details
Circle
CustomMouse
DotStim
ElementArrayStim
GratingStim
Attributes
GratingStim
GratingStim.win
GratingStim.tex
GratingStim.mask
GratingStim.units
GratingStim.sf
GratingStim.pos
GratingStim.ori
GratingStim.size
GratingStim.contrast
GratingStim.color
GratingStim.colorSpace
GratingStim.opacity
GratingStim.interpolate
GratingStim.texRes
GratingStim.name
GratingStim.autoLog
GratingStim.draw
GratingStim.autoDraw
Details
Helper functions
ImageStim

As of PsychoPy version 1.79.00 some of the properties for this stimulus can be set using the syntax:

stim.pos = newPos

others need to be set with the older syntax:

stim.setImage(newImage)
Attributes
ImageStim
ImageStim.win
ImageStim.setImage
ImageStim.setMask
ImageStim.units
ImageStim.pos
ImageStim.ori
ImageStim.size
ImageStim.contrast
ImageStim.color
ImageStim.colorSpace
ImageStim.opacity
ImageStim.interpolate
ImageStim.contains
ImageStim.overlaps
ImageStim.name
ImageStim.autoLog
ImageStim.draw
ImageStim.autoDraw
ImageStim.clearTextures
Details
Line
MovieStim
Attributes
MovieStim
MovieStim.win
MovieStim.mask
MovieStim.units
MovieStim.pos
MovieStim.ori
MovieStim.size
MovieStim.opacity
MovieStim.name
MovieStim.autoLog
MovieStim.draw
MovieStim.autoDraw
MovieStim.loadMovie
MovieStim.play
MovieStim.seek
MovieStim.pause
MovieStim.stop
MovieStim.setFlipHoriz
MovieStim.setFlipVert
Details
PatchStim (deprecated)
Polygon
RadialStim
Attributes
RadialStim
RadialStim.win
RadialStim.tex
RadialStim.mask
RadialStim.units
RadialStim.pos
RadialStim.ori
RadialStim.size
RadialStim.contrast
RadialStim.color
RadialStim.colorSpace
RadialStim.opacity
RadialStim.interpolate
RadialStim.setAngularCycles
RadialStim.setAngularPhase
RadialStim.setRadialCycles
RadialStim.setRadialPhase
RadialStim.name
RadialStim.autoLog
RadialStim.draw
RadialStim.autoDraw
RadialStim.clearTextures
Details
RatingScale
Rect
ShapeStim
Attributes
ShapeStim
ShapeStim.win
ShapeStim.units
ShapeStim.vertices
ShapeStim.closeShape
ShapeStim.pos
ShapeStim.ori
ShapeStim.size
ShapeStim.contrast
ShapeStim.lineColor
ShapeStim.lineColorSpace
ShapeStim.fillColor
ShapeStim.fillColorSpace
ShapeStim.opacity
ShapeStim.interpolate
ShapeStim.name
ShapeStim.autoLog
ShapeStim.draw
ShapeStim.autoDraw
Details
SimpleImageStim
TextStim
Window
psychopy.visual.windowframepack - Pack multiple monochrome images into RGB frame
ProjectorFramePacker
psychopy.visual.windowwarp - warping to spherical, cylindrical, or other projections
Warper

Commonly used:

  • ImageStim to show images
  • TextStim to show texts

Shapes (all special classes of ShapeStim):

  • ShapeStim to draw shapes with arbitrary numbers of vertices
  • Rect to show rectangles
  • Circle to show circles
  • Polygon to show polygons
  • Line to show a line

Images and patterns:

  • ImageStim to show images
  • SimpleImageStim to show images without bells and whistles
  • GratingStim to show gratings
  • RadialStim to show an annulus, a rotating wedge, a checkerboard, etc.

Multiple stimuli:

  • ElementArrayStim to show many stimuli of the same type
  • DotStim to show and control movement of dots

Other stimuli:

  • MovieStim to show movies
  • RatingScale to collect ratings
  • CustomMouse to change the cursor in windows with GUI. NB: will be deprecated soon

General purpose (applies to other stimuli):

  • BufferImageStim to make a faster-to-show “screenshot” of other stimuli
  • Aperture to restrict visibility area of other stimuli

See also Helper functions

psychopy.data - functions for storing/saving/analysing data

ExperimentHandler
TrialHandler
StairHandler
MultiStairHandler
QuestHandler
FitWeibull
FitLogistic
FitNakaRushton
FitCumNormal
importConditions()
functionFromStaircase()
bootStraps()

Encryption

Some labs may wish to better protect their data from casual inspection or accidental disclosure. This is possible within PsychoPy using a separate python package, pyFileSec, which grew out of PsychoPy. pyFileSec is distributed with the StandAlone versions of PsychoPy, or can be installed using pip or easy_install via https://pypi.python.org/pypi/PyFileSec/

Some elaboration of pyFileSec usage and security strategy can be found here: http://pythonhosted.org//PyFileSec

Basic usage is illustrated in the Coder demo > misc > encrypt_data.py
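
As a rough sketch only (the SecFile class and the key file name here are assumptions; see the pyFileSec documentation and the encrypt_data.py demo for the authoritative usage):

from pyfilesec import SecFile  # assumed API; check the pyFileSec docs

sf = SecFile('subj01.csv')     # wrap an existing data file
sf.encrypt('my_pub.pem')       # encrypt in place using a (hypothetical) RSA public key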

psychopy.event - for keypresses and mouse clicks

psychopy.filters - helper functions for creating filters

psychopy.gui - create dialogue boxes

DlgFromDict
Dlg
fileOpenDlg
fileSaveDlg

psychopy.hardware - hardware interfaces

PsychoPy can access a wide range of external hardware. For some devices the interface has already been created in the following sub-packages of PsychoPy. For others you may need to write the code to access the serial port etc. manually.

Contents:

Cedrus (response boxes)

The pyxid package, written by Cedrus, is included in the Standalone PsychoPy distributions. See https://github.com/cedrus-opensource/pyxid for further info.

Example usage:

import pyxid

# get a list of all attached XID devices
devices = pyxid.get_xid_devices()

dev = devices[0] # get the first device to use
if dev.is_response_device():
    dev.reset_base_timer()
    dev.reset_rt_timer()

    while True:
        dev.poll_for_response()
        if dev.response_queue_size() > 0:
            response = dev.get_next_response()
            # do something with the response
Useful functions
Device classes
Cambridge Research Systems Ltd.
For stimulus display
BitsPlusPlus

Control a CRS Bits++ device. See typical usage in the class summary (and in the menu demos>hardware>BitsBox of PsychoPy’s Coder view).

Important: See note on BitsPlusPlusIdentityLUT

Attributes
BitsPlusPlus
BitsPlusPlus.mode
BitsPlusPlus.setContrast
BitsPlusPlus.setGamma
BitsPlusPlus.setLUT
Details
Finding the identity LUT

For the Bits++ (and related) devices to work correctly it is essential that the graphics card is not altering in any way the values being passed to the monitor (e.g. by gamma correcting). It turns out that finding the ‘identity’ LUT, where exactly the same values come out as were put in, is not trivial. The obvious LUT would have something like 0/255, 1/255, 2/255... in entry locations 0,1,2... but unfortunately most graphics cards on most operating systems are ‘broken’ in one way or another, with rounding errors and incorrect start points etc.

PsychoPy provides a few of the common variants of LUT and that can be chosen when you initialise the device using the parameter rampType. If no rampType is specified then PsychoPy will choose one for you:

from psychopy import visual
from psychopy.hardware import crs

win = visual.Window([1024,768], useFBO=True) #we need to be rendering to framebuffer
bits = crs.BitsPlusPlus(win, mode = 'bits++', rampType = 1)

The Bits# is capable of reporting back the pixels in a line and this can be used to test that a particular LUT is indeed providing identity values. If you have previously connected a BitsSharp device and used it with PsychoPy then a file will have been stored with a LUT that has been tested with that device. In this case set rampType = “configFile” for PsychoPy to use it if such a file is found.

BitsSharp

Control a CRS Bits# device. See typical usage in the class summary (and in the menu demos>hardware>BitsBox of PsychoPy’s Coder view).

Attributes
BitsSharp
BitsSharp.mode
BitsSharp.isAwake
BitsSharp.getInfo
BitsSharp.checkConfig
BitsSharp.gammaCorrectFile
BitsSharp.temporalDithering
BitsSharp.monitorEDID
BitsSharp.beep
BitsSharp.getVideoLine
BitsSharp.start
BitsSharp.stop

Direct communications with the serial port:

BitsSharp.sendMessage
BitsSharp.getResponse

Control the CLUT (Bits++ mode only):

BitsSharp.setContrast
BitsSharp.setGamma
BitsSharp.setLUT
Details
For display calibration
ColorCAL
Attributes
ColorCAL
Details
egi (pynetstation)

Interface to EGI Netstation

This is currently a simple import of pynetstation, which needs to be installed separately (but is included in the Standalone distributions of PsychoPy as of version 1.62.01).

installation:

Download the package from the link above and copy egi.py into your site-packages directory.

usage:

from psychopy.hardware import egi

For an example see the demos menu of the PsychoPy Coder For further documentation see the pynetstation website

Launch an fMRI experiment: Test or Scan
fORP response box
iolab
joystick (pyglet and pygame)
labjack (USB I/O devices)

The labjack package is included in the Standalone PsychoPy distributions. It differs slightly from the standard version distributed by labjack (www.labjack.com) in the import. For the custom distribution use:

from labjack import u3

NOT:

import u3

In all other respects the library is the same and instructions on how to use it can be found here:

http://labjack.com/support/labjackpython

Note

To use labjack devices you do need also to install the driver software described on the page above

Minolta

Minolta light-measuring devices See http://www.konicaminolta.com/instruments


class psychopy.hardware.minolta.LS100(port, maxAttempts=1)

A class to define a Minolta LS100 (or LS110?) photometer

You need to connect a LS100 to the serial (RS232) port and when you turn it on press the F key on the device. This will put it into the correct mode to communicate with the serial port.

usage:

from psychopy.hardware import minolta
phot = minolta.LS100(port)
if phot.OK:#then we successfully made a connection and can send/receive
    print phot.getLum()
Parameters:

port: string

the serial port that should be checked

maxAttempts: int

If the device doesn’t respond first time how many attempts should be made? If you’re certain that this is the correct port and the device is on and correctly configured then this could be set high. If not then set this low.

Troubleshooting:
 

Various messages are printed to the log regarding the function of this device, but to see them you need to set the printing of the log to the correct level:

from psychopy import logging
logging.console.setLevel(logging.ERROR)#error messages only
logging.console.setLevel(logging.INFO)#will give a little more info
logging.console.setLevel(logging.DEBUG)#will export a log of all communications

If you’re using a Keyspan adapter (at least on OS X) be aware that it needs a driver installed. Otherwise no ports will be found.

Error messages:

ERROR: Couldn't connect to Minolta LS100/110 on ____:

This likely means that the device is not connected to that port (although the port has been found and opened). Check that the device has the [ in the bottom right of the display; if not turn off and on again holding the F key.

ERROR: No reply from LS100:

The port was found, the connection was made and an initial command worked, but then the device stopped communicating. If the first measurement taken with the device after connecting does not yield a reasonable intensity the device can sulk (not a technical term!). The “[” on the display will disappear and you can no longer communicate with the device. Turn it off and on again (with F depressed) and use a reasonably bright screen for your first measurement. Subsequent measurements can be dark (or we really would be in trouble!!).

checkOK(msg)

Check that the message from the photometer is OK. If there’s an error print it.

Then return True (OK) or False.

clearMemory()

Clear the memory of the device from previous measurements

getLum()

Makes a measurement and returns the luminance value

measure()

Measure the current luminance and set .lastLum to this value

sendMessage(message, timeout=5.0)

Send a command to the photometer and wait an allotted timeout for a response.

setMaxAttempts(maxAttempts)

Changes the number of attempts to send a message and read the output. Typically this should be low initially (if you aren’t sure that the device is set up correctly), but after the first successful reading it can be set higher.

setMode(mode='04')

Set the mode for measurements. Returns True (success) or False

‘04’ means absolute measurements, ‘08’ = peak, ‘09’ = continuous.

See user manual for other modes

PhotoResearch

Supported devices:

  • PR650
  • PR655/PR670
psychopy.hardware.findPhotometer(ports=None, device=None)

Try to find a connected photometer/photospectrometer! PsychoPy will sweep a series of serial ports trying to open them. If a port successfully opens then it will try to issue a command to the device. If it responds with one of the expected values then it is assumed to be the appropriate device.

Parameters:
ports : a list of ports to search

Each port can be a string (e.g. ‘COM1’, ‘/dev/tty.Keyspan1.1’) or a number (for win32 comports only). If none are provided then PsychoPy will sweep COM0-10 on win32 and search known likely port names on OS X and linux.

device : string giving expected device (e.g. ‘PR650’, ‘PR655’, ‘LS110’).

If this is not given then an attempt will be made to find a device of any type, but this often fails

Returns:

  • An object representing the first photometer found
  • None if the ports didn’t yield a valid response
  • None if there were not even any valid ports (suggesting a driver not being installed)

e.g.:

photom = findPhotometer(device='PR655') #sweeps ports 0 to 10 searching for a PR655
print photom.getLum()
if hasattr(photom, 'getSpectrum'):#can retrieve spectrum (e.g. a PR650)
    print photom.getSpectrum()

psychopy.info - functions for getting information about the system

psychopy.iohub - ioHub event monitoring framework

ioHub monitors for device events in parallel with the PsychoPy experiment execution by running in a separate process from the main PsychoPy script. This means, for instance, that keyboard and mouse event timing is not quantized by the rate at which the window.flip() method is called.

ioHub reports device events to the PsychoPy experiment runtime as they occur. Optionally, events can be saved to a HDF5 file.

All iohub events are timestamped using the PsychoPy global time base (psychopy.core.getTime()). Events can be accessed as a device independent event stream, or from a specific device of interest.

A comprehensive set of examples that each use at least one of the iohub devices is available in the psychopy/demos/coder/iohub folder.

Note

This documentation is in the very early stages of being written. Many sections regarding device usage details are simply placeholders. For information on devices or functionality that has not yet been migrated to the psychopy documentation, please visit the somewhat outdated original ioHub docs.

Using psychopy.iohub:
psychopy.iohub Specific Requirements
Computer Specifications

The design / requirements of your experiment itself can obviously influence what the minimum computer specification should be to provide good timing / performance.

The dual process design when running using psychopy.iohub also influences the minimum suggested specifications as follows:

  • Intel i5 or i7 CPU. A minimum of two CPU cores is needed.
  • 8 GB of RAM
  • Windows 7 +, OS X 10.7.5 +, or Linux Kernel 2.6 +

Please see the Recommended hardware section for further information that applies to PsychoPy in general.

Usage Considerations

When using psychopy.iohub, the following constraints should be noted:

  1. The pyglet graphics backend must be used; pygame is not supported.
  2. ioHub devices that report position data use the unit type defined by the PsychoPy Window. However, position data is reported using the full screen area and size the window was created in. Therefore, for accurate window position reporting, the PsychoPy window must be made full screen.
  3. On OS X, Assistive Device support must be enabled when using psychopy.iohub.
    • For OS X 10.7 - 10.8.5, instructions can be found here.
    • For OS X 10.9 +, the program being used to start your experiment script must be specifically authorized. Example instructions on authorizing an OS X 10.9 + app can be viewed here.
Software Requirements

When running PsychoPy using the OS X or Windows standalone distribution, all the necessary python package dependencies have already been installed, so the rest of this section can be skipped.

Note

Hardware specific software may need to be installed depending on the device being used. See the documentation page for the specific device hardware in question for further details.

If psychopy.iohub is being manually installed, first ensure the python packages listed in the Dependencies section of the manual are installed.

psychopy.iohub requires the following extra dependencies to be installed:

  1. psutil (version 1.2 +) A cross-platform process and system utilities module for Python.
  2. msgpack It’s like JSON, but fast and small.
  3. greenlet The greenlet package is a spin-off of Stackless, a version of CPython that supports micro-threads called “tasklets”.
  4. gevent (version 1.0 or greater) A coroutine-based Python networking library.
  5. numexpr Fast numerical array expression evaluator for Python and NumPy.
  6. pytables PyTables is a package for managing hierarchical datasets.
  7. pyYAML PyYAML is a YAML parser and emitter for Python.
Windows installations only
  1. pyHook Python wrapper for global input hooks in Windows.
Linux installations only
  1. python-xlib The Python X11R6 client-side implementation.
OSX installations only
  1. pyobjc : A Python ObjectiveC binding.
Starting the psychopy.iohub Process

To use ioHub within your PsychoPy Coder experiment script, ioHub needs to be started at the start of the experiment script. The easiest way to do this is by calling the launchHubServer function.

launchHubServer function
ioHubConnection Class

The psychopy.iohub.ioHubConnection object returned from the launchHubServer function provides methods for controlling the iohub process and accessing iohub devices and events.
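
A minimal sketch of that pattern (device and event attribute names follow the ioHub documentation, so treat the details as indicative rather than definitive):

from psychopy.iohub import launchHubServer

io = launchHubServer()            # start the ioHub process with the default devices
keyboard = io.devices.keyboard    # access a device via the ioHubConnection

# report any keyboard events received since the last check
for evt in keyboard.getEvents():
    print evt.key, evt.time

io.quit()                         # shut the ioHub process down at the end of the session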

ioHub Devices and Device Events

psychopy.iohub supports a large and growing set of supported devices. Details for each device can be found in the following sections.

Keyboard Device
The iohub Keyboard device provides methods to:
  • Check for any new keyboard events that have occurred since the last time keyboard events were checked or cleared.
  • Wait until a keyboard event occurs.
  • Clear the device of any unread events.
  • Get a list of all currently pressed keys.
Keyboard Events

The Keyboard device can return two types of events, which represent key press and key release actions on the keyboard.

KeyboardPress Event
KeyboardRelease Event
Mouse Device and Events

TBC

Computer Device

TBC

XInput Gamepad Device and Events

TBC

Eye Tracker Devices and Events

TBC

Serial Port Device and Events

TBC

Analog Input Device and Events

TBC

Touch Screen Device and Events

TBC

psychopy.logging - control what gets logged

Provides functions for logging error and other messages to one or more files and/or the console, using python’s own logging module. Some warning messages and error messages are generated by PsychoPy itself. The user can generate more using the functions in this module.

There are various levels for logged messages with the following order of importance: ERROR, WARNING, DATA, EXP, INFO and DEBUG.

When setting the level for a particular log target (e.g. LogFile) the user can set the minimum level that is required for messages to enter the log. For example, setting a level of INFO will result in INFO, EXP, DATA, WARNING and ERROR messages being recorded but not DEBUG messages.

By default, PsychoPy will record messages of WARNING level and above to the console. The user can silence the console by setting it to receive only CRITICAL messages (a level that PsychoPy itself doesn’t use), using the commands:

from psychopy import logging
logging.console.setLevel(logging.CRITICAL)
class psychopy.logging.LogFile(f=None, level=30, filemode='a', logger=None, encoding='utf8')

A text stream to receive inputs from the logging system

Create a log file as a target for logged entries of a given level

Parameters:
  • f:

    this could be a string to a path, that will be created if it doesn’t exist. Alternatively this could be a file object, sys.stdout or any object that supports .write() and .flush() methods

  • level:

    The minimum level of importance that a message must have to be logged by this target.

  • filemode: ‘a’, ‘w’

    Append or overwrite existing log file
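
For example, to send all experiment-level messages (and above) to a file of your choosing while leaving the console at its default level:

from psychopy import logging

logFile = logging.LogFile('lastRun.log', level=logging.EXP, filemode='w')
logging.exp('trial 1 started')              # recorded in lastRun.log
logging.warning('also sent to the console')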

setLevel(level)

Set a new minimal level for the log file/stream

write(txt)

Write directly to the log file (without using the logging functions). Useful for sending messages that only this file receives

psychopy.logging.addLevel(level, levelName)

Associate ‘levelName’ with ‘level’.

This is used when converting levels to text during message formatting.

psychopy.logging.critical(message)

Send the message to any receiver of logging info (e.g. a LogFile) of level log.CRITICAL or higher

psychopy.logging.data(msg, t=None, obj=None)

Log a message about data collection (e.g. a key press)

usage:
log.data(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.DATA or higher

psychopy.logging.debug(msg, t=None, obj=None)

Log a debugging message (not likely to be wanted once experiment is finalised)

usage:
log.debug(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.DEBUG or higher

psychopy.logging.error(message)

Send the message to any receiver of logging info (e.g. a LogFile) of level log.ERROR or higher

psychopy.logging.exp(msg, t=None, obj=None)

Log a message about the experiment (e.g. a new trial, or end of a stimulus)

usage:
log.exp(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.EXP or higher

psychopy.logging.fatal(msg, t=None, obj=None)

log.critical(message) Send the message to any receiver of logging info (e.g. a LogFile) of level log.CRITICAL or higher

psychopy.logging.flush(logger=<psychopy.logging._Logger instance>)

Send current messages in the log to all targets

psychopy.logging.getLevel(level)

Return the textual representation of logging level ‘level’.

If the level is one of the predefined levels (CRITICAL, ERROR, WARNING, INFO, DEBUG) then you get the corresponding string. If you have associated levels with names using addLevelName then the name you have associated with ‘level’ is returned.

If a numeric value corresponding to one of the defined levels is passed in, the corresponding string representation is returned.

Otherwise, the string “Level %s” % level is returned.

psychopy.logging.info(msg, t=None, obj=None)

Log some information - maybe useful, maybe not

usage:
log.info(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.INFO or higher

psychopy.logging.log(msg, level, t=None, obj=None)

Log a message

usage:
log(msg, level, t=t, obj=obj)

Log the msg, at a given level on the root logger

psychopy.logging.setDefaultClock(clock)

Set the default clock to be used to reference all logging times. Must be a psychopy.core.Clock object. Beware that if you reset the clock during the experiment then the resets will be reflected here. That might be useful if you want your logs to be reset on each trial, but probably not.
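
For example, to time-stamp all log entries against the experiment’s own clock:

from psychopy import core, logging

globalClock = core.Clock()              # the experiment's master clock
logging.setDefaultClock(globalClock)    # all subsequent log entries use this clock's times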

psychopy.logging.warn(msg, t=None, obj=None)

log.warning(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.WARNING or higher

psychopy.logging.warning(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.WARNING or higher


psychopy.microphone - Capture and analyze sound

(Available as of version 1.74.00; Advanced features available as of 1.77.00)

Overview

AudioCapture() allows easy audio recording and saving of arbitrary sounds to a file (wav format). AudioCapture will likely be replaced entirely by AdvAudioCapture in the near future.

AdvAudioCapture() can do everything AudioCapture does, and also allows onset-marker sound insertion and detection, loudness computation (RMS audio “power”), and lossless file compression (flac). The Builder microphone component now uses AdvAudioCapture by default.

Speech2Text() provides speech recognition (courtesy of google), with about 1-2 seconds latency for a 2 sec voice recording. Note that the sound files are sent to google over the internet. Intended for within-experiment processing (near real-time, 1-2s delayed), in which priority is given to keeping an experiment session moving along, even if that means skipping a slow response once in a while. See coder demo > input > speech_recognition.py.

Eventually, other features are planned, including: speech onset detection (to automatically estimate vocal RT for a given speech sample), and interactive visual inspection of sound waveform, with playback and manual onset determination (= the “gold standard” for RT).
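
A minimal sketch of recording with AdvAudioCapture (the sample rate and file name here are arbitrary choices):

from psychopy import microphone

microphone.switchOn(sampleRate=16000)              # initialise the audio library first
mic = microphone.AdvAudioCapture(filename='trial1.wav')
mic.record(sec=2, block=True)                      # record for 2 s, waiting until finished
print mic.getLoudness()                            # RMS 'power' of the recording
microphone.switchOff()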

Audio Capture
Speech recognition
Misc

PsychoPy provides lossless compression using the FLAC codec. (This requires that flac is installed on your computer. It is not included with PsychoPy by default, but you can download it for free from http://xiph.org/flac/ ). Functions for file-oriented Discrete Fourier Transform and RMS computation are also provided.

psychopy.misc - miscellaneous routines for converting units etc

psychopy.misc has gradually grown very large and the underlying code for its functions is distributed across multiple files. You can still (at least for now) import the functions here using from psychopy import misc but you can also import them from the tools sub-modules.
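
For example, the coordinate conversions can be reached either way:

from psychopy import misc
from psychopy.tools import coordinatetools

x, y = misc.pol2cart(45, 1.0)               # old-style import still works
x, y = coordinatetools.pol2cart(45, 1.0)    # same function from its tools sub-module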

From psychopy.tools.filetools
toFile(filename, data) save data (of any sort) as a pickle file
fromFile(filename) load data (of any sort) from a pickle file
mergeFolder(src, dst[, pattern]) Merge a folder into another.
From psychopy.tools.colorspacetools
dkl2rgb
dklCart2rgb
rgb2dklCart
hsv2rgb
lms2rgb
rgb2lms
dkl2rgb
From psychopy.tools.coordinatetools
cart2pol
cart2sph
pol2cart
sph2cart
From psychopy.tools.monitorunittools
convertToPix
cm2pix
cm2deg
deg2cm
deg2pix
pix2cm
pix2deg
From psychopy.tools.imagetools
array2image
image2array
makeImageAuto
From psychopy.tools.plottools
plotFrameIntervals(intervals) Plot a histogram of the frame intervals.
From psychopy.tools.typetools
float_uint8
uint8_float
float_uint16
From psychopy.tools.unittools
radians
degrees

psychopy.monitors - for those that don’t like Monitor Center

Most users won’t need to use the code here. In general the Monitor Centre interface is sufficient and monitors set up that way can be passed as strings to a Window. If there is some aspect of the normal calibration that you wish to override, you can do so in code, e.g.:

from psychopy import visual, monitors
mon = monitors.Monitor('SonyG55')#fetch the most recent calib for this monitor
mon.setDistance(114)#further away than normal?
win = visual.Window(size=[1024,768], monitor=mon)

You might also want to fetch the Photometer class for conducting your own calibrations

Monitor

GammaCalculator

getAllMonitors()
findPR650()
getLumSeriesPR650()
getRGBspectra()
gammaFun()
gammaInvFun()
makeDKL2RGB()
makeLMS2RGB()

psychopy.parallel - functions for interacting with the parallel port

This module provides read/write access to the parallel port for Linux or Windows.

The ParallelPort class described below will attempt to load whichever parallel port driver is first found on your system and should suffice in most instances. If you need to use a specific driver then, instead of using ParallelPort shown below, you can use one of the following as drop-in replacements, forcing the use of a specific driver:

  • psychopy.parallel.PParallelInpOut32
  • psychopy.parallel.PParallelDLPortIO
  • psychopy.parallel.PParallelLinux

Either way, each instance of the class can provide access to a different parallel port.
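
A minimal sketch of the class-based usage (the address shown is the common LPT1 address; adjust it for your machine):

from psychopy import parallel

port = parallel.ParallelPort(address=0x0378)  # or ParallelPort(address='/dev/parport0') on Linux
port.setData(0)            # all data pins low
port.setPin(2, 1)          # pin 2 high
print port.readPin(10)     # read the state of an input pin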

There is also a legacy API which consists of the routines which are directly in this module. That API assumes you only ever want to use a single parallel port at once.

psychopy.parallel.ParallelPort

alias of PParallelLinux

Legacy functions

We would strongly recommend you use the class above instead: these are provided for backwards compatibility only.

parallel.setPortAddress(address=888)

Set the memory address or device node for your parallel port, to be used in subsequent commands

common port addresses:

LPT1 = 0x0378 or 0x03BC
LPT2 = 0x0278 or 0x0378
LPT3 = 0x0278
or for Linux:
/dev/parport0

This routine will attempt to find a usable driver depending on your platform

parallel.setData(data)

Set the data to be presented on the parallel port (one ubyte). Alternatively you can set the value of each pin (data pins are pins 2-9 inclusive) using setPin()

examples:

parallel.setData(0) #sets all pins low
parallel.setData(255) #sets all pins high
parallel.setData(2) #sets just pin 3 high (remember that pin2=bit0)
parallel.setData(3) #sets just pins 2 and 3 high

you can also convert base 2 to int very easily in python:

parallel.setData( int("00000011",2) )#pins 2 and 3 high
parallel.setData( int("00000101",2) )#pins 2 and 4 high
parallel.setPin(pinNumber, state)

Set a desired pin to be high(1) or low(0).

Only pins 2-9 (incl) are normally used for data output:

parallel.setPin(3, 1)#sets pin 3 high
parallel.setPin(3, 0)#sets pin 3 low

parallel.readPin(pinNumber)

Determine whether a desired (input) pin is high(1) or low(0).

Pins 2-13 and 15 are currently read here

psychopy.serial - functions for interacting with the serial port

PsychoPy is compatible with Chris Liechti’s pyserial package. You can use it like this:

import serial
ser = serial.Serial(0, 19200, timeout=1)  # open first serial port
#ser = serial.Serial('/dev/ttyS1', 19200, timeout=1)#or something like this for Mac/Linux machines
ser.write('someCommand')
line = ser.readline()   # read a '\n' terminated line
ser.close()

Ports are fully configurable with all the options you would expect of RS232 communications. See http://pyserial.sourceforge.net for further details and documentation.

pyserial is included in the Standalone distributions (Windows and Mac); for manual installations you should install it yourself.

psychopy.sound - play various forms of sound

Sound

PsychoPy currently supports a choice of two sound libraries: pyo, or pygame. Select which will be used via the audioLib preference. sound.Sound() will then refer to either SoundPyo or SoundPygame. This can be set on a per-experiment basis by importing preferences, and setting the audioLib preference to use.

It is important to use sound.Sound() in order for proper initialization of the relevant sound library; do not use sound.SoundPyo or sound.SoundPygame directly. The two backends offer slightly different features: pygame sound is more thoroughly tested, whereas pyo offers lower latency and more features.
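
A minimal sketch of setting that preference within a script (this assumes the preference lives in the ‘general’ section, as it did in this version, and must happen before sound is imported):

from psychopy import prefs
prefs.general['audioLib'] = ['pyo', 'pygame']   # preferred library first

from psychopy import sound   # import after setting the preference
beep = sound.Sound('A', secs=0.5)
beep.play()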

psychopy.tools - miscellaneous tools

Container for all miscellaneous functions and classes

psychopy.tools.colorspacetools
dkl2rgb
dklCart2rgb
rgb2dklCart
hsv2rgb
lms2rgb
rgb2lms
dkl2rgb
Function details
psychopy.tools.coordinatetools
cart2pol
cart2sph
pol2cart
sph2cart
Function details
psychopy.tools.filetools

Functions and classes related to file and directory handling

psychopy.tools.filetools.toFile(filename, data)

save data (of any sort) as a pickle file

simple wrapper of the cPickle module in core python

psychopy.tools.filetools.fromFile(filename)

load data (of any sort) from a pickle file

simple wrapper of the cPickle module in core python

psychopy.tools.filetools.mergeFolder(src, dst, pattern=None)

Merge a folder into another.

Existing files in dst folder with the same name will be overwritten. Non-existent files/folders will be created.

psychopy.tools.filetools.openOutputFile(fileName, append=False, delim=None, fileCollisionMethod='rename', encoding='utf-8')

Open an output file (or standard output) for writing.

Parameters:
fileName : string
The desired output file name.
append : bool, optional
If True, append data to an existing file; otherwise, overwrite it with new data. Defaults to False, i.e. overwriting.
delim : string, optional
The delimiting character(s) between values. For a CSV file, this would be a comma; for a TSV file, a tab. Defaults to None.
fileCollisionMethod : string, optional
How to handle filename collisions. This is ignored if append is set to True. Defaults to rename.
encoding : string, optional
The encoding to use when writing the file. Defaults to 'utf-8'.
Returns:
f : file
A writable file handle.
Notes:

If no known filename extension is given, and the delimiter is a comma, the extension .csv will be chosen automatically. If the extension is unknown and the delimiter is a tab, the extension will be .tsv. .txt will be chosen otherwise.

psychopy.tools.filetools.genDelimiter(fileName)

Return a delimiter based on a filename.

Parameters:
fileName : string
The output file name.
Returns:
delim : string
A delimiter picked based on the supplied filename. This will be ‘,’ if the filename extension is .csv, and a tab character otherwise.
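
For example, combining the two helpers above:

from psychopy.tools.filetools import openOutputFile, genDelimiter

delim = genDelimiter('results.csv')                 # ',' because of the .csv extension
f = openOutputFile('results.csv', append=False, delim=delim)
f.write('trial%srt\n' % delim)
f.close()
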
psychopy.tools.imagetools
array2image
image2array
makeImageAuto
Function details
psychopy.tools.monitorunittools
convertToPix
cm2deg
cm2pix
deg2cm
deg2pix
pix2cm
pix2deg
Function details
psychopy.tools.plottools

Functions and classes related to plotting

psychopy.tools.plottools.plotFrameIntervals(intervals)

Plot a histogram of the frame intervals.

Where intervals is either the name of a file saved by Window.saveFrameIntervals, or simply a list (or array) of frame intervals

psychopy.tools.typetools
psychopy.tools.unittools

psychopy.web - Web methods

Test for access
psychopy.web.haveInternetAccess(forceCheck=False)

Detect active internet connection or fail quickly.

If forceCheck is False, will rely on a cached value if possible.

psychopy.web.requireInternetAccess(forceCheck=False)

Checks for access to the internet, raise error if no access.
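
For example:

from psychopy import web

if not web.haveInternetAccess(forceCheck=True):
    print 'No internet access: skipping the upload step'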

Upload a file over http
psychopy.web.upload(selector, filename, basicAuth=None, host=None, https=False, log=True)

Upload a local file over the internet to a configured http server.

This method handshakes with a php script on a remote server to transfer a local file to another machine via http (using POST).

Returns “success” plus a sha256 digest of the file on the server and a byte count. If the upload was not successful, an error code is returned (eg, “too_large” if the file size exceeds the limit specified server-side in up.php, or “no_file” if there was no POST attachment).

Note

The server that receives the files needs to be configured before uploading will work. php files and notes for a sys-admin are included in psychopy/contrib/http/. In particular, the php script up.php needs to be copied to the server’s web-space, with appropriate permissions and directories, including apache basic auth and https (if desired). The maximum size for an upload can be configured within up.php

A configured test-server is available; see the Coder demo for details (upload size is limited to ~1500 characters for the demo).

Parameters:

selector : (required, string)

a standard URL of the form http://host/path/to/up.php, e.g., http://upload.psychopy.org/test/up.php

Note

Limited https support is provided (see below).

filename : (required, string)

the path to the local file to be transferred. The file can be any format: text, utf-8, binary. All files are hex encoded while in transit (increasing the effective file size).

Note

Encryption (beta) is available as a separate step. That is, first encrypt() the file, then upload() the encrypted file in the same way that you would any other file.

basicAuth : (optional)
apache ‘user:password’ string for basic authentication. If a basicAuth value is supplied, it will be sent as the auth credentials (in cleartext); using https will encrypt the credentials.
host : (optional)
The default process is to extract host information from the selector. The host option allows you to specify a host explicitly (i.e., if it differs from the selector).
https : (optional)

If the remote server is configured to use https, passing the parameter https=True will encrypt the transmission including all data and basicAuth credentials. It is approximately as secure as using a self-signed X.509 certificate.

An important caveat is that the authenticity of the certificate returned from the server is not checked, and so the certificate could potentially be spoofed (see the warning under HTTPSConnection http://docs.python.org/library/httplib.html). Overall, using https can still be much more secure than not using it. The encryption is good, but that of itself does not eliminate all risk. Importantly, it is not as secure as one might expect, given that all major web browsers do check certificate authenticity. The idea behind this parameter is to require people to explicitly indicate that they want to proceed anyway, in effect saying “I know what I am doing and accept the risks (of using un-verified certificates)”.

Example:

See Coder demo / misc / http_upload.py
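
A minimal sketch (the URL, file name and credentials here are placeholders for your own configured server):

from psychopy import web

result = web.upload(selector='http://yourserver.org/path/up.php',  # placeholder URL
                    filename='data/subj01.csv',                    # local file to send
                    basicAuth='user:password')                     # only if your server needs it
print result   # 'success ...' plus a sha256 digest and byte count, or an error code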

Author: Jeremy R. Gray, 2012

Proxy set-up and testing
psychopy.web.setupProxy(log=True)

Set up the urllib proxy if possible.

The function will use the following methods in order to try and determine proxies:
  1. standard urllib2.urlopen (which will use any statically-defined http-proxy settings)
  2. previous stored proxy address (in prefs)
  3. proxy.pac files if these have been added to system settings
  4. auto-detect proxy settings (WPAD technology)
Returns:True (success) or False (failure)

Indices and tables

Further information:

Troubleshooting

Regrettably, PsychoPy is not bug-free. Running on all possible hardware and all platforms is a big ask. That said, a huge number of bugs have been resolved by the fact that there are literally 1000s of people using the software who have contributed bug reports and/or fixes.

Below are some of the more common problems and their workarounds, as well as advice on how to get further help.

The application doesn’t start

You may find that you try to launch the PsychoPy application, the splash screen appears and then goes away and nothing more happens. What this means is that an error has occurred during startup itself.

Commonly, the problem is that a preferences file is somehow corrupt. To fix that see Cleaning preferences and app data, below.

If resetting the preferences files doesn’t help then we need to get to an error message in order to work out why the application isn’t starting. The way to get that message depends on the platform (see below).

Windows users (starting from the Command Prompt):

  1. Did you get an error message that “This application failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem”? If so that indicates you need to update your .NET installation to SP1 .

  2. open a DOS Command Prompt (terminal):
    1. go to the Windows Start menu
    2. select Run... and type in cmd <Return>
  3. paste the following into that window (Ctrl-V doesn’t work but you can right-click and select Paste). Replace VERSION with your version number (e.g. 1.61.03):

    "C:\Program Files\PsychoPy2\python.exe" "C:\Program Files\PsychoPy2\Lib\site-packages\PsychoPy-VERSION-py2.6.egg\psychopy\app\psychopyApp.py"
    
  4. when you hit <return> you will hopefully get a moderately useful error message that you can Contribute to the Forum (mailing list)

Mac users:
  1. open the Console app (open spotlight and type console)
  2. if there are a huge number of messages there you might find it easiest to clear them (the brush icon) and then start PsychoPy again to generate a new set of messages

I run a Builder experiment and nothing happens

An error message may have appeared in a dialog box that is hidden (look to see if you have other open windows somewhere).

An error message may have been generated that was sent to output of the Coder view:
  1. go to the Coder view (from the Builder>View menu if not visible)
  2. if there is no Output panel at the bottom of the window, go to the View menu and select Output
  3. try running your experiment again and see if an error message appears in this Output view

If you still don’t get an error message but the application still doesn’t start then manually turn off the viewing of the Output (as below) and try the above again.

Manually turn off the viewing of output

Very occasionally an error will occur that crashes the application after the application has opened the Coder Output window. In this case the error message is still not sent to the console or command prompt.

To turn off the Output view so that error messages are sent to the command prompt/terminal on startup, open your appData.cfg file (see Cleaning preferences and app data), find the entry:

[coder]
showOutput = True

and set it to showOutput = False (note the capital ‘F’).

Use the source (Luke?)

PsychoPy comes with all the source code included. You may not think you’re much of a programmer, but have a go at reading the code. You might find you understand more of it than you think!

To have a look at the source code do one of the following:
  • when you get an error message in the Coder click on the hyperlinked error lines to see the relevant code

  • on Windows
    • go to Program Files\PsychoPy2\Lib\site-packages\Psychopy
    • have a look at some of the files there
  • on Mac
    • right click the PsychoPy app and select Show Package Contents
    • navigate to Contents/Resources/lib/python2.6/psychopy

Cleaning preferences and app data

Every time you shut down PsychoPy (by normal means) your current preferences and the state of the application (the location and state of the windows) are saved to disk. If PsychoPy is crashing during startup you may need to edit those files or delete them completely.

On OS X and Linux the files are:

~/.psychopy2/appData.cfg
~/.psychopy2/userPrefs.cfg

On Windows they are:

${DOCS AND SETTINGS}\{USER}\Application Data\psychopy2\appData.cfg
${DOCS AND SETTINGS}\{USER}\Application Data\psychopy2\userPrefs.cfg

The files are simple text, which you should be able to edit in any text editor. Particular changes that you might need to make:

If the problem is that you have a corrupt experiment file or script that is trying and failing to load on startup, you could simply delete the appData.cfg file. Please also Contribute to the Forum (mailing list) a copy of the file that isn’t working so that the underlying cause of the problem can be investigated (google first to see if it’s a known issue).

Recipes (“How-to”s)

Below are various tips/tricks/recipes/how-tos for PsychoPy. They are a little more involved than you would find in FAQs, but too specific for the manual as such (should they be there?).

Adding external modules to Standalone PsychoPy

You might find that you want to add some additional Python module/package to your Standalone version of PsychoPy. To do this you need to:

  • download a copy of the package (make sure it’s for Python 2.7 on your particular platform)
  • unzip/open it into a folder
  • add that folder to the path of PsychoPy by one of the methods below

Avoid adding the entire path (e.g. the site-packages folder) of separate installation of Python, because that may contain conflicting copies of modules that PsychoPy is also providing.

Using preferences

As of version 1.70.00 you can do this using the PsychoPy preferences/general. There you will find a preference for paths, which can be set to a list of strings, e.g. ['/Users/jwp/code', '~/code/thirdParty']

These only get added to the Python path when you import psychopy (or one of the psychopy packages) in your script.

Adding a .pth file

An alternative is to add a file into the site-packages folder of your application. This file should be pure text and have the extension .pth to indicate to Python that it adds to the path.

On win32 the site-packages folder will be something like:

C:/Program Files/PsychoPy2/lib/site-packages

On OS X you need to right-click the application icon, select ‘Show Package Contents’ and then navigate down to Contents/Resources/lib/python2.6. Put your .pth file here, next to the various libraries.

The advantage of this method is that you don’t need to do the import psychopy step. The downside is that when you update PsychoPy to a new major release you’ll need to repeat this step (patch updates won’t affect it though).

Animation

General question: How can I animate something?

Conceptually, animation just means that you vary some aspect of the stimulus over time. So the key idea is to draw something slightly different on each frame. This is how movies work, and the same principle can be used to create scrolling text, or fade-in / fade-out effects, and the like.

(copied & pasted from the email list; see the list for people’s names and a working script.)

Scrolling text

Key idea: Vary the position of the stimulus across frames.

Question: How can I produce scrolling text (like html’s <marquee behavior = “scroll” > directive)?

Answer: PsychoPy has animation capabilities built-in (it can even produce and export movies itself (e.g. if you want to show your stimuli in presentations)). But here you just want to animate stimuli directly.

e.g. create a text stimulus. In the ‘pos’ (position) field, type:

[frameN, 0]

and select “set every frame” in the popup button next to that field.

Push the Run button and your text will move from left to right, at one pixel per screen refresh, but stay at a fixed y-coordinate. In essence, you can enter an arbitrary formula in the position field and the stimulus will be re-drawn at a new position on each frame. frameN here refers to the number of frames shown so far, and you can extend the formula to produce what you need.

You might find performance issues (jittering motion) if you try to render a lot of text in one go, in which case you may have to switch to using images of text.

I wanted my text to scroll from right to left. So if you keep your eyes in the middle of the screen the next word to read would come from the right (as if you were actually reading text). The original formula posted above scrolls the other way. So, you have to put a negative sign in front of the formula for it to scroll the other way. You have to change the units to pixel. Also, you have to make sure you have an end time set, otherwise it just flickers. I also set my letter height to 100 pixels. The other problem I had was that I wanted the text to start blank and scroll into the screen. So, I wrote

[2000-frameN, 0]

and this worked really well.

Fade-in / fade-out effects

Key idea: vary the opacity of the stimulus over frames.

Question: I’d like to present an image with the image appearing progressively and disappearing progressively too. How to do that?

Answer: The Patch stimulus has an opacity field. Set the button next to it to be “set every frame” so that its value can be changed progressively, and enter an equation in the box that does what you want.

e.g. if your screen refresh rate is 60 Hz, then entering:

frameN/120

would cycle the opacity linearly from 0 to 1.0 over 2s (it will then continue incrementing but it doesn’t seem to matter if the value exceeds 1.0).

Using a code component might allow you to do more sophisticated things (e.g. fade in for a while, hold it, then fade out). Or more simply, you just create multiple successive Patch stimulus components, each with a different equation or value in the opacity field depending on their place in the timeline.

Building an application from your script

A lot of people ask how they can build a standalone application from their Python script. Usually this is because they have a collaborator and want to just send them the experiment.

In general this is not advisable - the resulting bundle of files (single file on OS X) will be on the order of 100Mb and will not provide the end user with any of the options that they might need to control the task (for example, Monitor Center won’t be provided so they can’t calibrate their monitor). A better approach in general is to get your collaborator to install the Standalone PsychoPy on their own machine, open your script and press run. (You don’t send a copy of Microsoft Word when you send someone a document - you expect the reader to install it themselves and open the document).

Nonetheless, it is technically possible to create exe files on Windows, and Ricky Savjani (savjani at bcm.edu) has kindly provided the following instructions for how to do it. A similar process might be possible on OS X using py2app - if you’ve done that then feel free to contribute the necessary script or instructions.

Using py2exe to build an executable

Instructions:

  1. Download and install py2exe (http://www.py2exe.org/)

  2. Develop your PsychoPy script as normal

  3. Copy this setup.py file into the same directory as your script

  4. Change the progName variable in this file to the name of your desired executable program

  5. Use cmd (or bash, terminal, etc.) and run the following in the directory containing the two files:

    python setup.py py2exe

  6. Open the ‘dist’ directory and run your executable

An example setup.py script:

#   Created 8-09-2011
#   Ricky Savjani
#   (savjani at bcm.edu)

#import necessary packages
from distutils.core import setup
import os, matplotlib
import py2exe

#the name of your main script (py2exe will build the .exe from it)
progName = 'MultipleSchizophrenia.py'

#Initialize Holder Files
preference_files = []
app_files = []
my_data_files=matplotlib.get_py2exe_datafiles()

#define which files you want to copy for data_files
for files in os.listdir('C:\\Program Files\\PsychoPy2\\Lib\\site-packages\\PsychoPy-1.65.00-py2.6.egg\\psychopy\\preferences\\'):
    f1 = 'C:\\Program Files\\PsychoPy2\\Lib\\site-packages\\PsychoPy-1.65.00-py2.6.egg\\psychopy\\preferences\\' + files
    preference_files.append(f1)

#if you might need to import the app files
#for files in os.listdir('C:\\Program Files\\PsychoPy2\\Lib\\site-packages\\PsychoPy-1.65.00-py2.6.egg\\psychopy\\app\\'):
#    f1 = 'C:\\Program Files\\PsychoPy2\\Lib\\site-packages\\PsychoPy-1.65.00-py2.6.egg\\psychopy\\app\\' + files
#    app_files.append(f1)

#all_files = [("psychopy\\preferences", preference_files),("psychopy\\app", app_files), my_data_files[0]]

#combine the files
all_files = [("psychopy\\preferences", preference_files), my_data_files[0]]

#define the setup
setup(
                console=[progName],
                data_files = all_files,
                options = {
                    "py2exe":{
                        "skip_archive": True,
                        "optimize": 2
                    }
                }
)

Builder - providing feedback

If you’re using the Builder then the way to provide feedback is with a Code Component to generate an appropriate message (and then a Text Component to present that message). PsychoPy will be keeping track of various aspects of the stimuli and responses for you throughout the experiment and the key is knowing where to find those.

The following examples assume you have a Loop called trials, containing a Routine with a Keyboard Component called key_resp. Obviously these need to be adapted in the code below to fit your experiment.

Note

The following examples generate their strings using Python 'formatted strings'. These are very powerful and flexible, but a little strange when you aren't used to them (they contain odd-looking characters like %.2f). See Generating formatted strings for more info.

Feedback after a trial

This is actually demonstrated in the ExtendedStroop demo (in the Builder>demos menu; unpack the demos and then look in the menu again - tada!)

If you have a Keyboard Component called key_resp then, after every trial you will have the following variables:

key_resp.keys #a python list of keys pressed
key_resp.rt #the time to the first key press
key_resp.corr #None, 0 or 1, if you are using 'store correct'

To create your msg, insert the following into the Begin Experiment section of the Code Component:

msg='doh!'#if this comes up we forgot to update the msg!

and then insert the following into the Begin Routine section (this will get run every repeat of the routine):

if len(key_resp.keys)<1:
    msg="Failed to respond"
elif key_resp.corr:#stored on the last run of the routine
    msg="Correct! RT=%.3f" %(key_resp.rt)
else:
    msg="Oops! That was wrong"
Feedback after a block

In this case the feedback routine would need to come after the loop (the block of trials) and the message needs to use the stored data from the loop rather than the key_resp directly. Accessing the data from a loop is not well documented but totally possible.

In this case, to get all the keys pressed in a numpy array:

trials.data['key_resp.keys'] #numpy array with size=[ntrials,ntypes]

If you used the ‘Store Correct’ feature of the Keyboard Component (and told psychopy what the correct answer was) you will also have a variable:

#numpy array storing whether each response was correct (1) or not (0)
trials.data['key_resp.corr']

So, to create your msg, insert the following into the Begin Experiment section of the Code Component:

msg='doh!'#if this comes up we forgot to update the msg!

and then insert the following into the Begin Routine section of your feedback Routine (which comes after the loop):

nCorr = trials.data['key_resp.corr'].sum() #.std(), .mean() also available
meanRt = trials.data['key_resp.rt'].mean()
msg = "You got %i trials correct (rt=%.2f)" %(nCorr,meanRt)
Draw your message to the screen

Using one of the above methods to generate your msg in a Code Component, you then need to present it to the participant by adding a Text Component to your feedback Routine and setting its text to $msg.

Warning

The Text Component needs to be below the Code Component in the Routine (because it needs to be updated after the code has been run) and its text needs to be set to update every repeat.

Builder - terminating a loop

People often want to terminate their Loops before they reach the designated number of trials based on subjects’ responses. For example, you might want to use a Loop to repeat a sequence of images that you want to continue until a key is pressed, or use it to continue a training period, until a criterion performance is reached.

To do this you need a Code Component inserted into your routine. All loops have an attribute called finished which is set to True or False (in Python these are really just other names for 1 and 0). This finished property gets checked on each pass through the loop. So the key piece of code to end a loop called trials is simply:

trials.finished=True #or trials.finished=1 if you prefer

Of course you need to check the condition for that with some form of if statement.

Example 1: You have a change-blindness study in which a pair of images flashes on and off, with intervening blanks, in a loop called presentationLoop. You record the key press of the subject with a Keyboard Component called resp1. Using the 'ForceEndTrial' parameter of resp1 you can end the current cycle of the loop, but to end the loop itself you need a Code Component. Insert the following two lines in the End Routine parameter of the Code Component, which will test whether more than zero keys have been pressed:

if len(resp1.keys)>0:
    presentationLoop.finished=1

Example 2: Sometimes you may have more possible trials than you can actually display. By default, a loop will present all possible trials (nReps * length-of-list). If you only want to present the first 10 of all possible trials, you can use a code component to count how many have been shown, and then finish the loop after doing 10.

This example assumes that your loop is named ‘trials’. You need to add two things, the first to initialize the count, and the second to update and check it.

Begin Experiment:

myCount = 0

Begin Routine:

myCount = myCount + 1
if myCount > 10:
    trials.finished = True

Note

In Python there is no 'end' statement to close an if block. The content of an if statement (or of a for-loop) is determined by the indentation of the lines. In the above example only one line was indented, so only that line will be executed if the condition evaluates to True.
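
For example (a minimal sketch; the msg variable is hypothetical), if two things should happen when the count is exceeded, both lines need the same indentation so that both sit inside the if block:

myCount = myCount + 1
if myCount > 10:
    trials.finished = True  # indented: runs only when the condition is True
    msg = "End of block"    # also indented: part of the same if block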

Installing PsychoPy in a classroom (administrators)

For running PsychoPy in a classroom environment it is probably preferable to have a 'partial' network installation. The PsychoPy library has frequent new releases, including bug fixes, and you will want to be able to update machines with these new releases. But PsychoPy depends on many other python libraries (over 200Mb in total) that tend not to change so rapidly, or at least not in ways critical to the running of experiments. If you install the whole PsychoPy application on the network then all of this data has to pass backwards and forwards, and starting the app will take even longer than normal.

The basic aim of this document is to get to a state whereby:

  • Python and the major dependencies of PsychoPy are installed on the local machine (probably a disk image to be copied across your lab computers)
  • PsychoPy itself (only ~2Mb) is installed in a network location where it can be updated easily by the administrator
  • a file is created in the installation that provides the path to the network drive location
  • Start-Menu shortcuts are set to point to the local Python but to the remote PsychoPy application launcher

Once this is done, the vast majority of updates can be performed simply by replacing the PsychoPy library on the network drive.

1. Install dependencies locally

Download the latest version of the Standalone PsychoPy distribution, and run as administrator. This will install a copy of Python and many dependencies to a default location of

C:\Program Files\PsychoPy2\
2. Move the PsychoPy library to the network

You need a network location that is going to be available, with read-only access, to all users on your machines. You will find all the contents of PsychoPy itself at something like this (version dependent obviously):

C:\Program Files\PsychoPy2\Lib\site-packages\PsychoPy-1.70.00-py2.6.egg

Move that entire folder to your network location and call it psychopyLib (or similar, getting rid of the version-specific part of the name). Now the following should be a valid path:

<NETWORK_LOC>\psychopyLib\psychopy
3. Update the Python path

The Python installation (in C:\Program Files\PsychoPy2) needs to know about the network location. If Python finds a text file with the extension .pth in its site-packages directory (e.g. C:\Program Files\PsychoPy2\Lib\site-packages) then it will add to the path any valid paths it finds in that file. So create a text file there, with a .pth extension, that has one line in it:

<NETWORK_LOC>\psychopyLib

You can test whether this has worked: go to C:\Program Files\PsychoPy2 and double-click on python.exe. A Python terminal window should come up. Now try:

>>> import psychopy

If psychopy is not found on the path then there will be an import error. Try adjusting the .pth file, restarting python.exe and importing again.
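
To check that the copy being imported really is the one on the network drive (rather than some stray local copy), you can ask Python where the package was loaded from; this is plain Python, nothing PsychoPy-specific:

>>> import psychopy
>>> print psychopy.__file__  # should point into <NETWORK_LOC>\psychopyLib\psychopy\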

4. Update the Start Menu

The shortcut in the Windows Start Menu will still be pointing to the local (now non-existent) PsychoPy library. Right-click it to change properties and set the shortcut to point to something like:

"C:\Program Files\PsychoPy2\pythonw.exe" "<NETWORK_LOC>\psychopyLib\psychopy\\app\psychopyApp.py"

You probably spotted from this that the PsychoPy app is simply a Python script. You may want to update the file associations too, so that .psyexp and .py are opened with:

"C:\Program Files\PsychoPy2\pythonw.exe" "<NETWORK_LOC>\psychopyLib\psychopy\app\psychopyApp.py" "%1"

Lastly, to make the shortcut look pretty, you might want to update the icon too. Set the icon’s location to:

"<NETWORK_LOC>\psychopyLib\psychopy\app\Resources\psychopy.ico"
5. Updating to a new version

Fetch the latest .zip release. Unpack it and replace the contents of <NETWORK_LOC>\psychopyLib\ with the contents of the zip file.

Generating formatted strings

A formatted string is a variable which has been converted into a string (text). In Python the specifics of how this is done are determined by what kind of variable you want to print.

Example 1: You have an experiment which generates a string variable called text. You want to insert this variable into a string so you can print it. This would be achieved with the following code:

message = 'The result is %s' %(text)

This will produce a variable message which, if used in a text object, would print the phrase 'The result is' followed by the value of text. In this instance %s is used because the variable being inserted is a string; it is a marker that tells the script where the variable should be inserted. The %(text) at the end tells the script which variable should be inserted there.

Multiple formatted strings (of potentially different types) can be entered into one string object:

longMessage = 'Well done %s that took %0.3f seconds' %(info['name'], time)

Some of the handy formatted string types:

>>> x=5
>>> x1=5124
>>> z='someText'
>>> 'show %s' %(z)
'show someText'
>>> '%0.1f' %(x)   #will show as a float to one decimal place
'5.0'
>>> '%3i' %(x) #an integer, at least 3 chars wide, padded with spaces
'  5'
>>> '%03i' %(x) #as above but pad with zeros (good for participant numbers)
'005'

See the python documentation for a more complete list.

Coder - interleave staircases

Often psychophysicists using staircase procedures want to interleave multiple staircases, either with different start points, or for different conditions.

There is now a class, psychopy.data.MultiStairHandler, to allow simple access to interleaved staircases of either basic or QUEST types; it can also be used from a Loop in the Builder. A minimal sketch of it is given below, followed by a method for building the same thing in your own code, which gives you more options.
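
For instance, MultiStairHandler usage might look like the following (the condition values and the trial code are placeholders, just to show the shape of the calls; see the data API reference for the full set of options):

from psychopy import data, event

#each condition needs at least a label and a startVal; per-condition staircase
#parameters (nUp, nDown, stepSizes...) can also be included in these dicts
conditions = [
    {'label': 'low', 'startVal': 1.5},
    {'label': 'mid', 'startVal': 3.0},
    {'label': 'high', 'startVal': 6.0},
]
stairs = data.MultiStairHandler(stairType='simple', conditions=conditions, nTrials=10)

for thisIntensity, thisCondition in stairs:  #the handler interleaves the staircases for you
    #present your stimulus at thisIntensity and collect a response here
    keys = event.waitKeys()
    wasCorrect = 'left' in keys  #simulate a correct response by pressing left
    stairs.addResponse(wasCorrect)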

The manual method works by nesting a pair of loops, one looping over the trials and another looping across the staircases. The staircases can be shuffled between trials, so that they do not simply alternate.

Note

Note the need to create a copy of the info. If you simply do thisInfo=info then all your staircases will end up pointing to the same object, and when you change the info in the final one, you will be changing it for all.

from psychopy import visual, core, data, event
from numpy.random import shuffle
import copy, time #from the std python libs

#create some info to store with the data
info={}
info['startPoints']=[1.5,3,6]
info['nTrials']=10
info['observer']='jwp'

win=visual.Window([400,400])
#---------------------
#create the stimuli
#---------------------

#create staircases
stairs=[]
for thisStart in info['startPoints']:
    #we need a COPY of the info for each staircase 
    #(or the changes here will be made to all the other staircases)
    thisInfo = copy.copy(info)
    #now add any specific info for this staircase
    thisInfo['thisStart']=thisStart #we might want to keep track of this
    thisStair = data.StairHandler(startVal=thisStart, 
        extraInfo=thisInfo,
        nTrials=50, nUp=1, nDown=3,
        minVal = 0.5, maxVal=8, 
        stepSizes=[4,4,2,2,1,1])
    stairs.append(thisStair)
    
for trialN in range(info['nTrials']):
    shuffle(stairs) #this shuffles 'in place' (ie stairs itself is changed, nothing returned)
    #then loop through our randomised order of staircases for this repeat
    for thisStair in stairs:
        thisIntensity = thisStair.next()
        print 'start=%.2f, current=%.4f' %(thisStair.extraInfo['thisStart'], thisIntensity)
        
        #---------------------
        #run your trial and get an input
        #---------------------
        keys = event.waitKeys() #(we can simulate by pushing left for 'correct')
        if 'left' in keys: wasCorrect=True
        else: wasCorrect = False
        
        thisStair.addData(wasCorrect) #so that the staircase adjusts itself
        
    #this trial (of all staircases) has finished
#all trials finished
        
#save data (separate pickle and txt files for each staircase)
dateStr = time.strftime("%b_%d_%H%M", time.localtime())#add the current time
for thisStair in stairs:
    #create a filename based on the subject and start value
    filename = "%s start%.2f %s" %(thisStair.extraInfo['observer'], thisStair.extraInfo['thisStart'], dateStr)
    thisStair.saveAsPickle(filename)
    thisStair.saveAsText(filename)   

Making isoluminant stimuli

From the mailing list (see there for names, etc):

Q1: How can I create colours (RGB) that are isoluminant?

A1: The easiest way to create isoluminant stimuli (or control the luminance content) is to create the stimuli in DKL space and then convert them into RGB space for presentation on the monitor.

More details on DKL space can be found in the section about Color spaces and conversions between DKL and RGB can be found in the API reference for psychopy.misc

Q2: There’s a difference in luminance between my stimuli. How could I correct for that?

I’m running an experiment where I manipulate color chromatic saturation, keeping luminance constant. I’ve coded the colors (red and blue) in rgb255 for 6 saturation values (10%, 20%, 30%, 40%, 50%, 60%, 90%) using a conversion from HSL to RGB color space.

Note that we don't possess spectrophotometers such as the PR650 in our lab to calibrate each color gun. I've calibrated the gamma of my monitor psychophysically. Gamma was set to 1.7 (threshold) for gamma(lum), gamma(R), gamma(G), gamma(B). Then I've measured the luminance of each stimulus with a Brontes colorimeter. But there's a difference in luminance between my stimuli. How could I correct for that?

A2: Without a spectroradiometer you won’t be able to use the color spaces like DKL which are designed to help this sort of thing.

If you don’t care about using a specific colour space though you should be able to deduce a series of isoluminant colors manually, because the luminance outputs from each gun should sum linearly. e.g. on my monitor:

maxR=46cd/m2
maxG=114
maxB=15

(note that green is nearly always brightest)

So I could make a 15cd/m2 stimulus using various appropriate fractions of those max values (requires that the screen is genuinely gamma-corrected):

R=0, G=0, B=255
R=255*15/46, G=0, B=0
R=255*7.5/46, G=255*7.5/114, B=0

Note that, if you want a pure fully-saturated blue, then you’re limited by the monitor to how bright you can make your stimulus. If you want brighter colours your blue will need to include some of the other guns (similarly for green if you want to go above the max luminance for that gun).
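
As a minimal sketch of that arithmetic (using the example gun maxima above; a real monitor needs its own measurements and genuine gamma correction, and the function name here is just for illustration):

#per-gun maximum luminances in cd/m2 (measure these for your own monitor)
maxLum = {'R': 46.0, 'G': 114.0, 'B': 15.0}

def gunValue(targetLum, gun):
    #0-255 gun value giving targetLum cd/m2 from one gun,
    #assuming luminance is linear in gun value (i.e. a gamma-corrected screen)
    value = 255.0 * targetLum / maxLum[gun]
    if value > 255:
        raise ValueError('%s gun cannot reach %.1f cd/m2' % (gun, targetLum))
    return value

#a 15 cd/m2 yellowish stimulus: half the luminance from red, half from green
rgb255 = [gunValue(7.5, 'R'), gunValue(7.5, 'G'), 0]
print rgb255  #roughly [41.6, 16.8, 0]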

A2.1. You should also consider that even if you set appropriate RGB values to display your pairs of chromatic stimuli at the same luminance that they might still appear different, particularly between observers (and even if your light measurement device says the luminance is the same, and regardless of the colour space you want to work in). To make the pairs perceptually isoluminant, each observer should really determine their own isoluminant point. You can do this with the minimum motion technique or with heterochromatic flicker photometry.

Adding a web-cam

From the mailing list (see there for names, etc):

“I spent some time today trying to get a webcam feed into my psychopy proj, inside my visual.window. The solution involved using the opencv module, capturing the image, converting that to PIL, and then feeding the PIL into a SimpleImageStim and looping and win.flipping. Also, to avoid looking like an Avatar in my case, you will have to change the default decoder used in PIL fromstring to utilize BGR instead of RGB in the decoding. I thought I would save some time for people in the future who might be interested in using a webcam feed for their psychopy project. All you need to do is import the opencv module into psychopy (importing modules was well documented by psychopy online) and integrate something like this into your psychopy script.”

from psychopy import visual, event, core
import Image, cv  #PIL and the (old-style) OpenCV bindings; other imports removed as unused

mywin = visual.Window(allowGUI=False, monitor='testMonitor', units='norm',colorSpace='rgb',color=[-1,-1,-1], fullscr=True)
mywin.setMouseVisible(False)

capture = cv.CaptureFromCAM(0)
img = cv.QueryFrame(capture)
pi = Image.fromstring("RGB", cv.GetSize(img), img.tostring(), "raw", "BGR", 0, 1)
print pi.size
myStim = visual.GratingStim(win=mywin, tex=pi, pos=[0,0.5], size = [0.6,0.6], opacity = 1.0, units = 'norm')
myStim.setAutoDraw(True)

while True:
    img = cv.QueryFrame(capture)
    pi = Image.fromstring("RGB", cv.GetSize(img), img.tostring(), "raw", "BGR", 0, 1)
    myStim.setTex(pi)
    mywin.flip()
    theKey = event.getKeys()
    if len(theKey) != 0:
        break

Frequently Asked Questions (FAQs)

Why is the bits++ demo not working?

So far PsychoPy supports bits++ only in the bits++ mode (rather than mono++ or color++). In this mode, a code (the T-lock code) is written to the lookup table on the bits++ device by drawing a line at the top of the window. The most likely reason that the demo isn't working for you is that this line is not being detected by the device, and so the lookup table is not being modified. Most of these problems are actually nothing to do with PsychoPy per se, but to do with your graphics card and the CRS bits++ box itself.

There are a number of reasons why the T-lock code is not being recognised:

  • the bits++ device is in the wrong mode. Open the utility that CRS supply and make sure you’re in the right mode. Try resetting the bits++ (turn it off and on).
  • the T-lock code is not fully on the screen. If you create a window that’s too big for the screen or badly positioned then the code will be broken/not visible to the device.
  • the T-lock code is on an ‘odd’ pixel.
  • the graphics card is doing some additional filtering (win32). Make sure you turn off any filtering in the advanced display properties for your graphics card
  • the gamma table of the graphics card is not set to be linear (but this should normally be handled by PsychoPy, so don’t worry so much about it).
  • you've got a Mac that's performing temporal dithering (new Macs, around 2009). Apple have come up with a new, very annoying idea, where they continuously vary the pixel values coming out of the graphics card every frame to create additional intermediate colours. This will break the T-lock code on roughly half to two-thirds of frames.

Can PsychoPy run my experiment with sub-millisecond timing?

This question is common enough and complex enough to have a section of the manual all of its own. See Timing Issues and synchronisation

Resources (e.g. for teaching)

There are a number of further resources to help learn/teach about PsychoPy.

If you also have PsychoPy materials/course then please let us know so that we can link to them from here too!

P4N 2015: Python for Neuroscience (and Psychology)

There will be a 3-day workshop in April 2015 at Nottingham University. It won't be only about PsychoPy, but about Python for science more generally, focussing on coding rather than using the Builder interface. We hope this year to run intermediate and novice sessions in parallel (rather than novice only).

Youtube tutorials

Materials for Builder

Materials for Coder


Previous events

For developers:

For Developers

There is a separate mailing list to discuss development ideas and issues.

For developers the best way to use PsychoPy is to install a version to your own copy of python (preferably 2.6 but 2.5 is OK). Make sure you have all the Dependencies, including the extra recommendedPackages for developers.

Don't install PsychoPy. Instead fetch a copy of the git repository and add this to the python path using a .pth file. Other users of the computer can then keep their own standalone versions installed, without your repository version touching them.

Using the repository

Note

Much of the following is explained with more detail in the nitime documentation, and then in further detail in numerous online tutorials.

Workflow

The use of git and the following workflow allows people to contribute changes that can easily be incorporated back into the project, while (hopefully) maintaining order and consistency in the code. All changes should be tracked and reversible.

  • Create a fork of the central psychopy/psychopy repository

  • Create a local clone of that fork

  • For small changes
    • make the changes directly in the master branch
    • push back to your fork
    • submit a pull request to the central repository
  • For substantial changes (new features)
    • create a branch
    • when finished run unit tests
    • when the unit tests pass merge changes back into the master branch
    • submit a pull request to the central repository
Create your own fork of the central repository

Go to github, create an account and make a fork of the psychopy repository. You can change your fork in any way you choose without it affecting the central project. You can also share your fork with others, including the central project.

Fetch a local copy

Install git on your computer. Create and upload an ssh key to your github account - this is necessary for you to push changes back to your fork of the project at github.

Then, in a folder of your choosing fetch your fork:

$ git clone git@github.com:USER/psychopy.git
$ cd psychopy
$ git remote add upstream git://github.com/psychopy/psychopy.git

The last line connects your copy (with read access) to the central server so you can easily fetch any updates to the central repository.

Fetching the latest version

Periodically it’s worth fetching any changes to the central psychopy repository (into your master branch, more on that below):

$ git checkout master
$ git pull upstream master  # here 'master' is the desired branch of psychopy to fetch
Run PsychoPy using your local copy

Now that you've fetched the latest version of psychopy using git, you should run this version in order to try out your/others' latest improvements. See this guide on how to permanently run your git version of psychopy instead of the version you previously installed.

Run git version for just one session (Linux and Mac only): If you want to switch between the latest-and-greatest development version from git and the stable version installed on your system, you can choose to only temporarily run the git version. Open a terminal and set a temporary python path to your psychopy git folder:

$ export PYTHONPATH=/path/to/local/git/folder/

To check that worked you should open python in the terminal and try to import psychopy:

$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psychopy

PsychoPy depends on a lot of other packages and you may get a variety of failures to import them until you have them all installed in your custom environment!

Fixing bugs and making minor improvements

You can make minor changes directly in the master branch of your fork. After making a change you need to commit a set of changes to your files with a message. This enables you to group together changes and you will subsequently be able to go back to any previous commit, so your changes are reversible.

I (Jon) usually do this by opening the graphical user interface that comes with git:

$ git gui

From the GUI you can select (or stage in git terminology) the files that you want to include in this particular commit and give it a message. Give a clear summary of the changes for the first line. You can add more details about the changes on lower lines if needed.

If you have internet access then you could also push your changes back up to your fork (which is called your origin by default), either by pressing the push button in the GUI or by closing that and typing:

$ git push
Commit messages

Informative commit messages are really useful when we have to go back through the repository finding the time that a particular change to the code occurred. Precede your message with one or more of the following to help us spot easily if this is a bug fix (which might need pulling into other development branches) or new feature (which we might want to avoid pulling in if it might disrupt existing code).

  • BF : bug fix
  • FF : ‘feature’ fix. This is for fixes to code that hasn’t been released
  • RF : refactoring
  • NF : new feature
  • ENH : enhancement (improvement to existing code)
  • DOC: for all kinds of documentation related commits
  • TEST: for adding or changing tests

NB: The difference between BF and FF is that BF indicates a fix that is appropriate for back-porting to earlier versions, whereas FF indicates a fix to code that has not been released, and so cannot be back-ported.

Share your improvement with others

Only a couple of people have direct write-access to the psychopy repository, but you can get your changes included in upstream by pushing your changes back to your github fork and then submitting a pull request. Communication is good, and hopefully you have already been in touch (via the user or dev lists) about your changes.

When adding an improvement or new feature, consider how it might impact others. Is it likely to be generally useful, or is it something that only you or your lab would need? (It’s fun to contribute, but consider: does it actually need to be part of PsychoPy?) Including more features has a downside in terms of complexity and bloat, so try to be sure that there is a “business case” for including it. If there is, try at all times to be backwards compatible, e.g., by adding a new keyword argument to a method or function (not always possible). If it’s not possible, it’s crucial to get wider input about the possible impacts. Flag situations that would break existing user scripts in your commit messages.

Part of sharing your code means making things sensible to others, which includes good coding style and writing some documentation. You are the expert on your feature, and so are in the best position to elaborate nuances or gotchas. Use meaningful variable names, and include comments in the code to explain non-trivial things, especially the intention behind specific choices. Include or edit the appropriate doc-string, because these are automatically turned into API documentation (via sphinx). Include doc-tests if that would be meaningful. The existing code base has a comment / code ratio of about 28%, which earns it high marks.

For larger changes and especially new features, you might need to create some usage examples, such as a new Coder demo, or even a Builder demo. These can be invaluable for being a starting point from which people can adapt things to the needs of their own situation. This is a good place to elaborate usage-related gotchas.

In terms of style, try to make your code blend in with and look like the existing code (e.g., using about the same level of comments, use camelCase for var names, despite the conflict with the usual PEP – we’ll eventually move to the underscore style, but for now keep everything consistent within the code base). In your own code, write however you like of course. This is just about when contributing to the project.

Add a new feature branch

For more substantial work, you should create a new branch in your repository. Often while working on a new feature other aspects of the code will get broken and the master branch should always be in a working state. To create a new branch:

$ git branch feature-somethingNew

You can now switch to your new feature branch with:

$ git checkout feature-somethingNew

And get back to your master branch with:

$ git checkout master

You can push your new branch back to your fork (origin) with:

$ git push origin feature-somethingNew
Completing work on a feature

When you’re done run the unit tests for your feature branch. Set the debug preference setting (in the app section) to True, and restart psychopy. This will enable access to the test-suite. In debug mode, from the Coder (not Builder) you can now do Ctrl-T / Cmd-T (see Tools menu, Unit Testing) to bring up the unit test window. You can select a subset of tests to run, or run them all.

It’s also possible to run just selected tests, such as doctests within a single file. From a terminal window:

cd psychopy/tests/  #eg /Users/jgray/code/psychopy/psychopy/tests
./run.py path/to/file_with_doctests.py

If the tests pass you hopefully haven’t damaged other parts of PsychoPy (!?). If possible add a unit test for your new feature too, so that if other people make changes they don’t break your work!

You can merge your changes back into your master branch with:

$ git checkout master
$ git merge feature-somethingNew

Merge conflicts happen, and need to be resolved. If you configure your git preferences (~/.gitconfig) to include:

[merge]
    summary = true
    log = true
    tool = opendiff

then you’ll be able to use a handy GUI interface (opendiff) for reviewing differences and conflicts, just by typing:

git mergetool

from the command line after hitting a merge conflict (such as during a git pull upstream master).

Once you’ve folded your new code back into your master and pushed it back to your github fork then it’s time to Share your improvement with others.

Adding documentation

There are several ways to add documentation, all of them useful: doc strings, comments in the code, and demos to show an example of actual usage. To further explain something to end-users, you can create or edit a .rst file that will automatically become formatted for the web, and eventually appear on www.psychopy.org.

Make a new .rst file under psychopy/docs/source/, either in a new folder or within an existing one.

To test that your doc source code (.rst file) does what you expect in terms of formatting for display on the web, you can simply do something like (this is my actual path, unlikely to be yours):

$ cd /Users/jgray/code/psychopy/docs/
$ make html

Do this within your docs directory (requires sphinx to be installed, try “easy_install sphinx” if it’s not working). That will add a build/html sub-directory.

Then you can view your new document in a browser by opening the corresponding .html file from that build/html directory.

Push your changes to your github repository (using a “DOC:” commit message) and let Jon know, e.g. with a pull request.

Adding a new Builder Component

Builder Components are auto-detected and displayed to the experimenter as icons (builder, right panel). This makes it straightforward to add new ones.

All you need to do is create a list of parameters that the Component needs to know about (that will automatically appear in the Component’s dialog) and a few pieces of code specifying what code should be called at different points in the script (e.g. beginning of the Routine, every frame, end of the study etc...). Many of these will come simply from subclassing the _base or _visual Components.

To get started, Add a new feature branch for the development of this component. (If this doesn’t mean anything to you then see Using the repository )

You’ll mainly be working in the directory .../psychopy/app/builder/components/. Take a look at several existing Components (such as ‘image.py’), and key files including ‘_base.py’ and ‘_visual.py’.

There are three main steps, the first being by far the most involved.

1. File: newcomp.py

It’s pretty straightforward to model a new Component on one of the existing ones. Be prepared to specify what your Component needs to do at several different points in time: before the first trial, every frame, at the end of each routine, and at the end of the experiment. In addition, you may need to sacrifice some complexity in order to keep things streamlined enough for a Builder (see e.g., ratingscale.py).

Your new Component class (in your file newcomp.py) should inherit from BaseComponent (in _base.py), VisualComponent (in _visual.py), or KeyboardComponent (in keyboard.py). You may need to rewrite some or all of these methods to override the default behavior:

class NewcompComponent(BaseComponent):  # or (VisualComponent)
    def __init__(...):
        super(NewcompComponent, self).__init__(...)
        ...
    def writeInitCode(self, buff):
    def writeRoutineStartCode(self, buff):
    def writeFrameCode(self, buff):
    def writeRoutineEndCode(self, buff):

Calling super() will create the basic default set of params that almost every component will need: name, startVal, startType, etc. Some of these fields may need to be overridden (e.g., durationEstim in sound.py). Inheriting from VisualComponent (which in turn inherits from BaseComponent) adds default visual params, plus arranges for Builder scripts to import psychopy.visual. If your component will need other libs, call self.exp.requirePsychopyLib([‘neededLib’]) (see e.g., parallelPort.py).

At the top of a component file is a dict named _localized. These mappings allow a strict separation of internal string values (= used in logic, never displayed) from values used for display in the Builder interface (= for display only, possibly translated, never used in logic). The .hint and .label fields of params[‘someParam’] should always be set to a localized value, either by using a dict entry such as _localized[‘message’], or via the globally available translation function, _(‘message’). Localized values must not be used elsewhere in a component definition.
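
As a rough sketch of how that looks in practice (the parameter name, default value and exact Param keyword arguments here are assumptions for illustration; check an existing component such as text.py for the real pattern):

#near the top of newcomp.py: display strings, keyed by the internal (en_US) name
#(_ is the app's translation function; see the Localization section)
_localized = {'message': _('Message')}

#later, inside NewcompComponent.__init__():
self.params['message'] = Param('Hello', valType='str', updates='constant',
    hint=_('The text to be displayed'),  #localized, display only
    label=_localized['message'])         #localized, display only
#everywhere else in the code the param is referred to as 'message' (en_US),
#never by its translated label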

Very occasionally, you may also need to edit settings.py, which writes out the set-up code for the whole experiment (e.g., to define the window). For example, this was necessary for ApertureComponent, to pass “allowStencil=True” to the window creation.

Your new Component writes code into a buffer that becomes an executable python file, xxx_lastrun.py (where xxx is whatever the experimenter specifies when saving from the builder, xxx.psyexp). You will do a bunch of this kind of call in your newcomp.py file:

buff.writeIndented(your_python_syntax_string_here)

You have to manage the indentation level of the output code, see experiment.IndentingBuffer().

xxx_lastrun.py is the file that gets built when you run xxx.psyexp from the builder. So you will want to look at xxx_lastrun.py frequently when developing your component.

Name-space

There are several internal variables (er, names of python objects) that have a specific, hardcoded meaning within xxx_lastrun.py. You can expect the following to be there, and they should only be used in the original way (or something will break for the end-user, likely in a mysterious way):

'win' = the window
't' = time within the trial loop, referenced to trialClock
'x', 'y' = mouse coordinates, but only if the experimenter uses a mouse component

Handling of variable names is under active development, so this list may well be out of date. (If so, you might consider updating it or posting a note to psychopy-dev.)

Preliminary testing suggests that there are 600-ish names from numpy or numpy.random, plus the following:

['KeyResponse', '__builtins__', '__doc__', '__file__', '__name__', '__package__', 'buttons', 'core', 'data', 'dlg', 'event', 'expInfo', 'expName', 'filename', 'gui', 'logFile', 'os', 'psychopy', 'sound', 't', 'visual', 'win', 'x', 'y']

Yet other names get derived from user-entered names, like trials –> thisTrial.

Params

self.params is a key construct that you build up in __init__. You need name, startTime, duration, and several other params to be defined or you get errors. ‘name’ should be of type ‘code’.

The Param() class is defined in psychopy.app.builder.experiment.Param(). A very useful thing that Params know is how to create a string suitable for writing into the .py script. In particular, the __str__ representation of a Param will format its value (.val) based on its type (.valType) appropriately. This means that you don’t need to check or handle whether the user entered a plain string, a string with a code trigger character ($), or the field was of type code in the first place. If you simply request the str() representation of the param, it is formatted correctly.
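
For example (a hedged sketch; the parameter and method shown are hypothetical, for a visual component), writing a per-frame position update into the generated script can rely on the Param formatting itself:

def writeFrameCode(self, buff):
    #str()/%s of a Param yields a correctly formatted value, whether the user
    #typed a plain string, $-prefixed code, or the field was code-type already
    buff.writeIndented("%s.setPos(%s)\n" %
                       (self.params['name'], self.params['pos']))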

To indicate that a param (eg, thisParam) should be considered as an advanced feature, set its category to advanced: self.params[‘thisParam’].categ = ‘Advanced’. Then the GUI shown to the experimenter will place it on the ‘Advanced’ tab. Other categories work similarly (Custom, etc).

During development, it can sometimes be helpful to save the params into the xxx_lastrun.py file as comments, so you can see what is happening:

def writeInitCode(self,buff):
    # for debugging during Component development:
    buff.writeIndented("# self.params for aperture:\n")
    for p in self.params.keys():
        try: buff.writeIndented("# %s: %s <type %s>\n" % (p, self.params[p].val, self.params[p].valType))
        except: pass

A lot more detail can be inferred from existing components.

Making things loop-compatible looks interesting – see keyboard.py for an example, especially code for saving data at the end.

Notes & gotchas
syntax errors in new_comp.py:
The PsychoPy app will fail to start if there are syntax errors in any of the components that are auto-detected. Just correct them and start the app again.
param[].val :

If you have a boolean variable (e.g., my_flag) as one of your params, note that self.params["my_flag"] is always True (the param exists -> True). So in a boolean context you almost always want the .val part, e.g., if self.params["my_flag"].val:.

However, you do not always want .val. Specifically, in a string/unicode context (= to trigger the self-formatting features of Param()s), you almost always want "%s" % self.params['my_flag'], without .val. Note that it's better to do this via "%s" than str() because str(self.params["my_flag"]) coerces things to type str (squashing unicode) whereas %s works for both str and unicode.
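
A compact, hypothetical illustration of both cases:

#boolean context: test the value itself, so .val is needed
if self.params['my_flag'].val:
    #string context: let the Param format itself, so no .val
    buff.writeIndented("useFlag = %s\n" % self.params['my_flag'])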

2. Icon: newcomp.png

Using your favorite image software, make an icon for your Component with a descriptive name, e.g., ‘newcomp.png’. Dimensions = 48 x 48. Put it in the components directory.

In ‘newcomp.py’, have a line near the top:

iconFile = path.join(thisFolder, 'newcomp.png')
3. Documentation: newcomp.rst

Just make a descriptively-named text file that ends in .rst (“restructured text”), and put it in psychopy/docs/source/builder/components/ . It will get auto-formatted and end up at http://www.psychopy.org/builder/components/newcomp.html

Style-guide for coder demos

Each coder demo is intended to illustrate a key PsychoPy feature (or two), especially in ways that show usage in practice, and go beyond the description in the API. The aim is not to illustrate every aspect, but to get people up to speed quickly, so they understand how basic usage works, and could then play around with advanced features.

As a newcomer to PsychoPy, you are in a great position to judge whether the comments and documentation are clear enough or not. If something is not clear, you may need to ask a PsychoPy contributor for a description; email psychopy-dev@googlegroups.com.

Here are some style guidelines, written for the OpenHatch event(s) but hopefully useful after that too. These are intended specifically for the coder demos, not for the internal code-base (although they are generally quite close).

The idea is to have clean code that looks and works the same way across demos, while leaving the functioning mostly untouched. Some small changes to function might be needed (e.g., to enable the use of 'escape' to quit), but typically only minor changes like this. A minimal skeleton that pulls several of the points below together is sketched after the list.

  • Generally, when you run the demo, does it look good and help you understand the feature? Where might there be room for improvement? You can either leave notes in the code in a comment, or include them in a commit message.

  • Standardize the top stuff to have 1) a shebang with python2 (not just python), 2) utf-8 encoding, and 3) a comment:

    #!/usr/bin/env python2
    # -*- coding: utf-8 -*-
    """Demo name, purpose, description (1-2 sentences, although some demos need more explanation).
    """
    

For the comment / description, it’s a good idea to read and be informed by the relevant parts of the API (see http://psychopy.org/api/api.html), but there’s no need to duplicate that text in your comment. If you are unsure, please post to the dev list psychopy-dev@googlegroups.com.

  • Follow PEP-8 mostly, some exceptions:

    • current PsychoPy convention is to use camelCase for variable names, so don’t convert those to underscores
    • 80 char columns can spill over a little. Try to keep things within 80 chars most of the time.
    • do allow multiple imports on one line if they are thematically related (e.g., import os, sys, glob).
    • inline comments are ok (because the code demos are intended to illustrate and explain usage in some detail, more so than typical code).
  • Check all imports:

    • remove any unnecessary ones
    • replace import time with from psychopy import core. Use core.getTime() (= seconds since the script started) or core.getAbsTime() (= seconds, unix-style) instead of time.time(), for all time-related functions or methods, not just time.time().
    • add from __future__ import division, even if not needed. And make sure that doing so does not break the demo!
  • Fix any typos in comments; convert any lingering British spellings to US, e.g., change colour to color

  • Prefer if <boolean>: as a construct instead of if <boolean> == True:. (There might not be any to change).

  • If you have to choose, opt for more verbose but easier-to-understand code instead of clever or terse formulations. This is for readability, especially for people new to python. If you are unsure, please add a note to your commit message, or post a question to the dev list psychopy-dev@googlegroups.com.

  • Standardize variable names:

    • use win for the visual.Window(), and so win.flip()
  • Provide a consistent way for a user to exit a demo using the keyboard, ideally enable this on every visual frame: use if len(event.getKeys(['escape'])): core.quit(). Note: if there is a previous event.getKeys() call, it can slurp up the 'escape' keys. So check for 'escape' first.

  • Time-out after 10 seconds, if there’s no user response and a timeout is appropriate for the demo (and a longer time-out might be needed, e.g., for ratingScale.py):

    demoClock = core.Clock()  # demoClock's time is 0.000s at this point
    ...
    if demoClock.getTime() > 10.:
        core.quit()
    
  • Most demos are not full screen. For any that are full-screen, see if it can work without being full screen. If it has to be full-screen, add some text to say that pressing ‘escape’ will quit.

  • If displaying log messages to the console seems to help understand the demo, here’s how to do it:

    from psychopy import logging
    ...
    logging.console.setLevel(logging.INFO)  # or logging.DEBUG for even more stuff
    
  • End a script with win.close() (assuming the script used a visual.Window), and then core.quit() even though it’s not strictly necessary
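
Putting several of these points together, a minimal skeleton might look like this (a hypothetical outline, not one of the actual demos):

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""Demo skeleton: draws some text until 'escape' is pressed or 10 s elapse.
"""
from __future__ import division

from psychopy import visual, event, core

win = visual.Window([400, 400], units='norm')
msg = visual.TextStim(win, text="press 'escape' to quit")

demoClock = core.Clock()  # demoClock's time is 0.000s at this point
while demoClock.getTime() < 10.0:  # time out after 10 s
    msg.draw()
    win.flip()
    if len(event.getKeys(['escape'])):
        break

win.close()
core.quit()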

Localization (I18N, translation)

PsychoPy is used worldwide. Starting with v1.81, many parts of PsychoPy itself (the app) can be translated into any language that has a unicode character set. A translation affects what the experimenter sees while creating and running experiments; it is completely separate from what is shown to the subject. Translations of the online documentation will need a completely different approach.

In the app, translation is handled by a function, _translate(), which takes a string argument. (The standard name is _(), but unfortunately this conflicts with _ as used in some external packages that PsychoPy depends on.) The _translate() function returns a translated, unicode version of the string in the locale / language that was selected when starting the app. If no translation is available for that locale, the original string is returned (= English).

A locale setting (e.g., ‘ja_JP’ for Japanese) allows the end-user (= the experimenter) to control the language that will be used for display within the app itself. (It can potentially control other display conventions as well, not just the language.) PsychoPy will obtain the locale from the user preference (if set), or the OS.

Workflow: 1) Make a translation from English (en_US) to another language. You’ll need a strong understanding of PsychoPy, English, and the other language. 2) In some cases it will be necessary to adjust PsychoPy’s code, but only if new code has been added to the app and that code displays text. Then re-do step 1 to translate the newly added strings.

See notes in psychopy/app/localization/readme.txt.

Make a translation (.po file)

As a translator, you will likely introduce many new people to PsychoPy, and your translations will greatly influence their experience. Try to be completely accurate; it is better to leave something in English if you are unsure how PsychoPy is supposed to work.

To translate a given language, you’ll need to know the standard 5-character code (see psychopy/app/localization/mappings). E.g., for Japanese, wherever LANG appears in the documentation here, you should use the actual code, i.e., “ja_JP” (without quotes).

A free app called poedit is useful for managing a translation. For a given language, the translation mappings (from en_US to LANG) are stored in a .po file (a text file with extension .po); after editing with poedit, these are converted into binary format (with extension .mo) which are used when the app is running.

  • Start translation (do these steps once):

    Start a translation by opening psychopy/app/locale/LANG/LC_MESSAGE/messages.po in Poedit. If there is no such .po file, create a new one:

    • make a new directory psychopy/app/locale/LANG/LC_MESSAGE/ if needed. Your LANG will be auto-detected within PsychoPy only if you follow this convention. You can copy metadata (such as the project name) from another .po file.

    Set your name and e-mail address from "Preferences..." in the "File" menu. Set the translation properties (such as project name, language and charset) from the Catalog Properties Dialog, which can be opened from "Properties..." in the "Catalog" menu.

    In poedit’s properties dialog, set the “source keywords” to include ‘_translate’. This allows poedit to find the strings in PsychoPy that are to be translated.

    To add paths where Poedit scans .py files, open the "Sources paths" tab of the Catalog Properties Dialog, and set "Base path:" to "../../../../../" (= psychopy/psychopy/). Nothing more should be needed. If you've created a new catalog, save it to psychopy/app/locale/LANG/LC_MESSAGE/messages.po.

    Probably not needed, but check anyway: Edit the file containing language code and name mappings, psychopy/app/localization/mappings, and fill in the name for your language. Give a name that should be familiar to people who read that language (i.e., use the name of the language as written in the language itself, not in en_US). About 25 are already done.

  • Edit a translation:

    Open the .po file with Poedit and press the "Update" button on the toolbar to update newly added / removed strings that need to be translated. Select a string you want to translate and enter your translation in the "Translation:" box. If you are unsure where a string is used, point at the string in the "Source text" box and right-click; you can see where the string is defined.

  • Technical terms should not be translated: Builder, Coder, PsychoPy, Flow, Routine, and so on. (See the Japanese translation for guidance.)

  • If there are formatting arguments in the original string (%s, %(first)i), the same number of arguments must also appear in the translation (but their order is not constrained to be the original order). If they are named (e.g., %(first)i), that part should not be translated (here, first is a python name).

  • If you think your translation might have room for improvement, indicate that it is “fuzzy”. (Saving Notes does not work for me on Mac, seems like a bug in poedit.)

  • After making a new translation, saving it in poedit will save the .po file and also make an associated .mo file. You need to update the .mo file if you want to see your changes reflected in PsychoPy.

  • The start-up tips are stored in separate files, and are not translated by poedit. Instead:

  • copy the default version (named psychopy/app/Resources/tips.txt) to a new file in the same directory, named tips_LANG.txt. Then replace English-language tips with translated tips. Note that some of the humor might not translate well, so feel free to leave out things that would be too odd, or include occasional mild humor that would be more appropriate. Humor must be respectful and suitable for using in a classroom, laboratory, or other professional situation. Don’t get too creative here. If you have any doubt, best leave it out. (Hopefully it goes without saying that you should avoid any religious, political, disrespectful, or sexist material.)
  • in poedit, translate the file name: translate “tips.txt” as “tips_LANG.txt”
  • Commit both the .po and .mo files to github (not just one or the other), and any changed files (e.g., tips_LANG, localization/mappings).
Adjust PsychoPy’s code

This is mostly complete (as of 1.81.00), but will be needed for new code that displays text to users of the app (experimenters, not study participants).

There are a few things to keep in mind when working on the app’s code to make it compatible with translations. If you are only making a translation, you can skip this section.

  • In PsychoPy’s code, the language to be used should always be English with American spellings (e.g., “color”).
  • Within the app, the return value from _translate() should be used only for display purposes: in menus, tooltips, etc. A translated value should never be used as part of the logic or internal functioning of PsychoPy. It is purely a “skin”. Internally, everything must be in en_US.
  • Basic usage is exactly what you expect: _translate("hello") will return a unicode string at run-time, using mappings for the current locale as provided by a translator in a .mo file. (Not all translations are available yet, see above to start a new one.) To have the app display a translated string to the experimenter, just display the return value from the underscore translation function.
  • The strings to be translated must appear somewhere in the app code base as explicit strings within _translate(). If you need to translate a variable, e.g., one named str_var, using the expression _translate(str_var), then somewhere else you need to explicitly give all the possible values that str_var can take and enclose each of them within the translate function. It is okay for that to be elsewhere, even in another file, but not in a comment. This allows poedit to discover all the strings that need to be translated. (This is one of the purposes of the _localized dict at the top of some modules.)
  • _translate() should not be given a null string to translate; if you use a variable, check that it is not ‘’ to avoid invoking _translate('').
  • Strings that contain formatting placeholders (e.g., %d, %s, %.4f) require a little more thought. Single placeholders are easy enough: _translate("hello, %s") % name.
  • Strings with multiple formatting placeholders require named arguments, because positional arguments are not always sufficient to disambiguate things depending on the phrase and the language to be translated into: _translate("hello, %(first)s %(last)s") % {'first': firstname, 'last': lastname}
  • Localizing drop-down menus is a little more involved. Such menus should display localized strings, but return selected values as integers (GetSelection() returns the position within the list). Do not use GetStringSelection(), because this will return the localized string, breaking the rule about a strict separation of display and logic. See Builder ParamDialogs for examples.
Other notes

When there are more translations (and if they make the app download large) we might want to manage things differently (e.g., have translations as a separate download from the app).

Adding a new Menu Item

Adding a new menu-item to the Builder (or Coder) is relatively straightforward, but there are several files that need to be changed in specific ways.

1. makeMenus()

The code that constructs the menus for the Builder is within a method named makeMenus(), within class builder.BuilderFrame(). Decide which submenu your new command fits under, and look for that section (e.g., File, Edit, View, and so on). For example, to add an item for making the Routine panel items larger, I added two lines within the View menu, by editing the makeMenus() method of class BuilderFrame within psychopy/app/builder/builder.py (similar for Coder):

self.viewMenu.Append(self.IDs.tbIncrRoutineSize, _("&Routine Larger\t%s") %self.app.keys['largerRoutine'], _("Larger routine items"))
wx.EVT_MENU(self, self.IDs.tbIncrRoutineSize, self.routinePanel.increaseSize)

Note the use of the translation function, _(), for translating text that will be displayed to users (menu listing, hint).

2. wxIDs.py

A new item needs to have a (numeric) ID so that wx can keep track of it. Here, the number is self.IDs.tbIncrRoutineSize, which I had to define within the file psychopy/app/wxIDs.py:

tbIncrRoutineSize=180

It’s possible that, instead of hard-coding it like this, it’s better to make a call to wx.NewId() – wx will take care of avoiding duplicate IDs, presumably.

3. Key-binding prefs

I also defined a key to use as a keyboard short-cut for activating the new menu item:

self.app.keys['largerRoutine']

The actual key is defined in a preference file. Because psychopy is multi-platform, you need to add info to four different .spec files, all of them within the psychopy/preferences/ directory, for four operating systems (Darwin, FreeBSD, Linux, Windows). For Darwin.spec (meaning Mac OS X), I added two lines. The first line is not merely a comment: it is automatically used as a tooltip (in the preferences dialog, under key-bindings), and the second line is the actual short-cut key to use:

# increase display size of Routines
largerRoutine = string(default='Ctrl++') # on mac book pro this is good

This means that the user has to hold down the Ctrl key and then press the + key. Note that on macs, ‘Ctrl’ in the spec is automatically converted into ‘Cmd’ for the actual key to use; in the .spec, you should always specify things in terms of ‘Ctrl’ (and not ‘Cmd’). The default value is the key-binding to use unless the user defines another one in her or his preferences (which then overrides the default). Try to pick a sensible key for each operating system, and update all four .spec files.

4. Your new method

The second line within makeMenus() adds the key-binding definition into wx’s internal space, so that when the key is pressed, wx knows what to do. In the example, it will call the method self.routinePanel.increaseSize, which I had to define to perform the desired behaviour (in this case, increment an internal variable and redraw the routine panel at the new, larger size).
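A rough sketch of what such a handler might look like (the attribute and helper names here are invented for illustration and are not the actual Builder code):

def increaseSize(self, event=None):
    """Event handler bound in makeMenus(): enlarge the items in the Routine panel."""
    self.routineSize += 1     # hypothetical internal size variable
    self.redrawRoutines()     # hypothetical redraw of the panel at the new, larger size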

5. Documentation

To let people know that your new feature exists, add a note about your new feature in the CHANGELOG.txt, and appropriate documentation in .rst files.

Happy Coding Folks!!

PsychoPy Experiment file format (.psyexp)

The file format used to save experiments constructed in the PsychoPy Builder was created especially for the purpose, but it is an open format, using a basic xml form, that may be of use to other similar software. Indeed the Builder itself could be used to generate experiments on different backends (such as Vision Egg, PsychToolbox or PyEPL). The xml format of the file makes it extremely platform independent, as well as moderately(?!) easy for humans to read. There has been a suggestion to generate an XSD (or similar) schema against which psyexp files could be validated. That is a low priority, but would be a welcome addition if you wanted to work on it(!) There is a basic XSD (XML Schema Definition) available in psychopy/app/builder/experiment.xsd.

The simplest way to understand the file format is probably just to create an experiment, save it, and open the file in an xml-aware editor/viewer (e.g. change the file extension from .psyexp to .xml and then open it in Firefox). An example (from the stroop demo) is shown below.

The file format maps fairly obviously onto the structure of experiments constructed with the Builder interface, as described here. There are general Settings for the experiment, then there is a list of Routines and a Flow that describes how these are combined.

As with any xml file the format contains object nodes which can have direct properties and also child nodes. For instance the outermost node of the .psyexp file is the experiment node, with properties that specify the version of PsychoPy that was used to save the file most recently and the encoding of text within the file (ascii, unicode etc.), and with child nodes Settings, Routines and Flow.

Parameters

Many of the nodes described within this xml description of the experiment contain Param entries, representing different parameters of that Component. Nearly all parameter nodes have a name property and a val property. The parameter node with the name “advancedParams” does not have them. Most also have a valType property, which can take values ‘bool’, ‘code’, ‘extendedCode’, ‘num’, ‘str’ and an updates property that specifies whether this parameter is changing during the experiment and, if so, whether it changes ‘every frame’ (of the monitor) or ‘every repeat’ (of the Routine).
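For example, the Param nodes can be inspected with Python’s standard xml tools; a minimal sketch (the file name ‘stroop.psyexp’ is just an example):

from xml.etree import ElementTree as ET

root = ET.parse('stroop.psyexp').getroot()   # the <PsychoPy2experiment> node
for param in root.iter('Param'):
    print('%s = %s (valType=%s, updates=%s)' %
          (param.get('name'), param.get('val'),
           param.get('valType'), param.get('updates')))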

Settings

The Settings node contains a number of parameters that, in PsychoPy, would normally be set in the Experiment settings dialog, such as the monitor to be used. This node contains a number of Parameters that map onto the entries in that dialog.

Routines

This node provides a sequence of xml child nodes, each of which describes a Routine. Each Routine contains a number of children, each specifying a Component, such as a stimulus or response collecting device. In the Builder view, the Routines obviously show up as different tabs in the main window and the Components show up as tracks within that tab.

Components

Each Component is represented in the .psyexp file as a set of parameters, corresponding to the entries in the appropriate component dialog box, that completely describe how and when the stimulus should be presented or how and when the input device should be read from. Different Components have slightly different nodes in the xml representation, which give rise to different sets of parameters. For instance the TextComponent node has parameters such as colour and font, whereas the KeyboardComponent node has parameters such as forceEndTrial and correctIf.

Flow

The Flow node is rather simpler. Its children simply specify objects that occur in a particular order in time. A Routine described in this flow must exist in the list of Routines, since that is where it is fully described. One Routine can occur once, more than once or not at all in the Flow. The other children that can occur in a Flow are LoopInitiators and LoopTerminators, which specify the start and end points of a loop. All loops must have exactly one initiator and one terminator.

Names

For the experiment to generate valid PsychoPy code the name parameters of all objects (Components, Loops and Routines) must be unique and contain no spaces. That is, an experiment cannot have two different Routines called ‘trial’, nor even a Routine called ‘trial’ and a Loop called ‘trial’.

The Parameter names belonging to each Component (or the Settings node) must be unique within that Component, but can be identical to parameters of other Components or can match the Component name themselves. A TextComponent should not, for example, have multiple ‘pos’ parameters, but other Components generally will, and a Routine called ‘pos’ would also be permissible.
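As an illustration, the uniqueness rule can be checked directly on a saved file; a rough sketch (again assuming an example file called ‘stroop.psyexp’):

from xml.etree import ElementTree as ET

root = ET.parse('stroop.psyexp').getroot()
names = []
for routine in root.find('Routines'):
    names.append(routine.get('name'))                     # each Routine
    names.extend(comp.get('name') for comp in routine)    # each Component within it
names.extend(loop.get('name')
             for loop in root.find('Flow').findall('LoopInitiator'))  # each Loop
duplicates = set(n for n in names if names.count(n) > 1)
print('duplicate names: %s' % sorted(duplicates))  # should print an empty list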

<PsychoPy2experiment version="1.50.04" encoding="utf-8">
  <Settings>
    <Param name="Monitor" val="testMonitor" valType="str" updates="None"/>
    <Param name="Window size (pixels)" val="[1024, 768]" valType="code" updates="None"/>
    <Param name="Full-screen window" val="True" valType="bool" updates="None"/>
    <Param name="Save log file" val="True" valType="bool" updates="None"/>
    <Param name="Experiment info" val="{'participant':'s_001', 'session':001}" valType="code" updates="None"/>
    <Param name="Show info dlg" val="True" valType="bool" updates="None"/>
    <Param name="logging level" val="warning" valType="code" updates="None"/>
    <Param name="Units" val="norm" valType="str" updates="None"/>
    <Param name="Screen" val="1" valType="num" updates="None"/>
  </Settings>
  <Routines>
    <Routine name="trial">
      <TextComponent name="word">
        <Param name="name" val="word" valType="code" updates="constant"/>
        <Param name="text" val="thisTrial.text" valType="code" updates="set every repeat"/>
        <Param name="colour" val="thisTrial.rgb" valType="code" updates="set every repeat"/>
        <Param name="ori" val="0" valType="code" updates="constant"/>
        <Param name="pos" val="[0, 0]" valType="code" updates="constant"/>
        <Param name="times" val="[0.5,2.0]" valType="code" updates="constant"/>
        <Param name="letterHeight" val="0.2" valType="code" updates="constant"/>
        <Param name="colourSpace" val="rgb" valType="code" updates="constant"/>
        <Param name="units" val="window units" valType="str" updates="None"/>
        <Param name="font" val="Arial" valType="str" updates="constant"/>
      </TextComponent>
      <KeyboardComponent name="resp">
        <Param name="storeCorrect" val="True" valType="bool" updates="constant"/>
        <Param name="name" val="resp" valType="code" updates="None"/>
        <Param name="forceEndTrial" val="True" valType="bool" updates="constant"/>
        <Param name="times" val="[0.5,2.0]" valType="code" updates="constant"/>
        <Param name="allowedKeys" val="['1','2','3']" valType="code" updates="constant"/>
        <Param name="storeResponseTime" val="True" valType="bool" updates="constant"/>
        <Param name="correctIf" val="resp.keys==str(thisTrial.corrAns)" valType="code" updates="constant"/>
        <Param name="store" val="last key" valType="str" updates="constant"/>
      </KeyboardComponent>
    </Routine>
    <Routine name="instruct">
      <TextComponent name="instrText">
        <Param name="name" val="instrText" valType="code" updates="constant"/>
        <Param name="text" val="&quot;Please press;&#10;1 for red ink,&#10;2 for green ink&#10;3 for blue ink&#10;(Esc will quit)&#10;&#10;Any key to continue&quot;" valType="code" updates="constant"/>
        <Param name="colour" val="[1, 1, 1]" valType="code" updates="constant"/>
        <Param name="ori" val="0" valType="code" updates="constant"/>
        <Param name="pos" val="[0, 0]" valType="code" updates="constant"/>
        <Param name="times" val="[0, 10000]" valType="code" updates="constant"/>
        <Param name="letterHeight" val="0.1" valType="code" updates="constant"/>
        <Param name="colourSpace" val="rgb" valType="code" updates="constant"/>
        <Param name="units" val="window units" valType="str" updates="None"/>
        <Param name="font" val="Arial" valType="str" updates="constant"/>
      </TextComponent>
      <KeyboardComponent name="ready">
        <Param name="storeCorrect" val="False" valType="bool" updates="constant"/>
        <Param name="name" val="ready" valType="code" updates="None"/>
        <Param name="forceEndTrial" val="True" valType="bool" updates="constant"/>
        <Param name="times" val="[0, 10000]" valType="code" updates="constant"/>
        <Param name="allowedKeys" val="" valType="code" updates="constant"/>
        <Param name="storeResponseTime" val="False" valType="bool" updates="constant"/>
        <Param name="correctIf" val="resp.keys==str(thisTrial.corrAns)" valType="code" updates="constant"/>
        <Param name="store" val="last key" valType="str" updates="constant"/>
      </KeyboardComponent>
    </Routine>
    <Routine name="thanks">
      <TextComponent name="thanksText">
        <Param name="name" val="thanksText" valType="code" updates="constant"/>
        <Param name="text" val="&quot;Thanks!&quot;" valType="code" updates="constant"/>
        <Param name="colour" val="[1, 1, 1]" valType="code" updates="constant"/>
        <Param name="ori" val="0" valType="code" updates="constant"/>
        <Param name="pos" val="[0, 0]" valType="code" updates="constant"/>
        <Param name="times" val="[1.0, 2.0]" valType="code" updates="constant"/>
        <Param name="letterHeight" val="0.2" valType="code" updates="constant"/>
        <Param name="colourSpace" val="rgb" valType="code" updates="constant"/>
        <Param name="units" val="window units" valType="str" updates="None"/>
        <Param name="font" val="arial" valType="str" updates="constant"/>
      </TextComponent>
    </Routine>
  </Routines>
  <Flow>
    <Routine name="instruct"/>
    <LoopInitiator loopType="TrialHandler" name="trials">
      <Param name="endPoints" val="[0, 1]" valType="num" updates="None"/>
      <Param name="name" val="trials" valType="code" updates="None"/>
      <Param name="loopType" val="random" valType="str" updates="None"/>
      <Param name="nReps" val="5" valType="num" updates="None"/>
      <Param name="trialList" val="[{'text': 'red', 'rgb': [1, -1, -1], 'congruent': 1, 'corrAns': 1}, {'text': 'red', 'rgb': [-1, 1, -1], 'congruent': 0, 'corrAns': 1}, {'text': 'green', 'rgb': [-1, 1, -1], 'congruent': 1, 'corrAns': 2}, {'text': 'green', 'rgb': [-1, -1, 1], 'congruent': 0, 'corrAns': 2}, {'text': 'blue', 'rgb': [-1, -1, 1], 'congruent': 1, 'corrAns': 3}, {'text': 'blue', 'rgb': [1, -1, -1], 'congruent': 0, 'corrAns': 3}]" valType="str" updates="None"/>
      <Param name="trialListFile" val="/Users/jwp...troop/trialTypes.csv" valType="str" updates="None"/>
    </LoopInitiator>
    <Routine name="trial"/>
    <LoopTerminator name="trials"/>
    <Routine name="thanks"/>
  </Flow>
</PsychoPy2experiment>


Reference Manual (API)

Contents:

psychopy.core - basic functions (clocks etc.)

Basic functions, including timing, rush (imported), quit

psychopy.core.getTime()

Get the current time since psychopy.core was loaded.

Version Notes: Note that prior to PsychoPy 1.77.00 the behaviour of getTime() was platform dependent (on OSX and linux it was equivalent to psychopy.core.getAbsTime() whereas on windows it returned time since loading of the module, as now)

psychopy.core.getAbsTime()

Return unix time (i.e., whole seconds elapsed since Jan 1, 1970).

This uses the same clock-base as the other timing features, like getTime(). The time (in seconds) ignores the time-zone (like time.time() on linux). To take the timezone into account, use int(time.mktime(time.gmtime())).

Absolute times in seconds are especially useful to add to generated file names for being unique, informative (= a meaningful time stamp), and because the resulting files will always sort as expected when sorted in chronological, alphabetical, or numerical order, regardless of locale and so on.
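For instance, a minimal sketch of that idea:

from psychopy import core

# whole seconds since 1970 give a unique, chronologically sortable file name
dataFileName = 'data_%d.csv' % core.getAbsTime()  # e.g. 'data_1417000000.csv'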

Version Notes: This method was added in PsychoPy 1.77.00

psychopy.core.wait(secs, hogCPUperiod=0.2)

Wait for a given time period.

If secs=10 and hogCPU=0.2 then for 9.8s python’s time.sleep function will be used, which is not especially precise, but allows the cpu to perform housekeeping. In the final hogCPUperiod the more precise method of constantly polling the clock is used for greater precision.

If you want to obtain key-presses during the wait, be sure to use pyglet and to hogCPU for the entire time, and then call psychopy.event.getKeys() after calling wait()
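A minimal sketch of that pattern:

from psychopy import core, event

core.wait(2.0, hogCPUperiod=2.0)  # hog the CPU for the whole wait (pyglet backend assumed)
keys = event.getKeys()            # keys pressed during the wait are now available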

If you want to suppress checking for pyglet events during the wait, do this once:

core.checkPygletDuringWait = False

and from then on you can do:

core.wait(sec)

This will preserve terminal-window focus during command line usage.

class psychopy.core.Clock

A convenient class to keep track of time in your experiments. You can have as many independent clocks as you like (e.g. one to time responses, one to keep track of stimuli...)

This clock is identical to the MonotonicClock except that it can also be reset to 0 or another value at any point.

add(t)

Add more time to the clock’s ‘start’ time (t0).

Note that, by adding time to t0, you make the current time appear less. Can have the effect that getTime() returns a negative number that will gradually count back up to zero.

e.g.:

timer = core.Clock()
timer.add(5)
while timer.getTime() < 0:
    pass  # do something here until the clock counts back up to zero
reset(newT=0.0)

Reset the time on the clock. With no args time will be set to zero. If a float is received this will be the new time on the clock

class psychopy.core.CountdownTimer(start=0)

Similar to a Clock except that time counts down from the time of last reset

Typical usage:

timer = core.CountdownTimer(5)
while timer.getTime() > 0:  # after 5s getTime() will become negative
    pass  # do stuff here while the timer counts down
getTime()

Returns the current time left on this timer in secs (sub-ms precision)

class psychopy.core.MonotonicClock(start_time=None)

A convenient class to keep track of time in your experiments using a sub-millisecond timer.

Unlike the Clock this cannot be reset to arbitrary times. For this clock t=0 always represents the time that the clock was created.

Don’t confuse this class with core.monotonicClock which is an instance of it that got created when PsychoPy.core was imported. That clock instance is deliberately designed always to return the time since the start of the study.

Version Notes: This class was added in PsychoPy 1.77.00

getLastResetTime()

Returns the current offset being applied to the high resolution timebase used by Clock.

getTime()

Returns the current time on this clock in secs (sub-ms precision)

class psychopy.core.StaticPeriod(screenHz=None, win=None, name='StaticPeriod')

A class to help insert a timing period that includes code to be run.

Typical usage:

fixation.draw()
win.flip()
ISI = StaticPeriod(screenHz=60)
ISI.start(0.5) #start a period of 0.5s
stim.image = 'largeFile.bmp' #could take some time
ISI.complete() #finish the 0.5s, taking into account one 60Hz frame

stim.draw()
win.flip() #the period takes into account the next frame flip
#time should now be at exactly 0.5s later than when ISI.start() was called
Parameters:
  • screenHz – the frame rate of the monitor (leave as None if you don’t want this accounted for)
  • win – if a visual.Window is given then StaticPeriod will also pause/restart frame interval recording
  • name – give this StaticPeriod a name for more informative logging messages
complete()

Completes the period, using up whatever time is remaining with a call to wait()

Returns: 1 for success, 0 for fail (the period overran)
start(duration)

Start the period. If this is called a second time, the timer will be reset and starts again

psychopy.visual - many visual stimuli

Window to display all stimuli below.

Aperture

BufferImageStim

Attributes
BufferImageStim
BufferImageStim.win
BufferImageStim.buffer
BufferImageStim.rect
BufferImageStim.stim
BufferImageStim.mask
BufferImageStim.units
BufferImageStim.sf
BufferImageStim.pos
BufferImageStim.ori
BufferImageStim.size
BufferImageStim.contrast
BufferImageStim.color
BufferImageStim.colorSpace
BufferImageStim.opacity
BufferImageStim.interpolate
BufferImageStim.name
BufferImageStim.autoLog
BufferImageStim.draw
BufferImageStim.autoDraw
Details

Circle

CustomMouse

DotStim

ElementArrayStim

GratingStim

Attributes
GratingStim
GratingStim.win
GratingStim.tex
GratingStim.mask
GratingStim.units
GratingStim.sf
GratingStim.pos
GratingStim.ori
GratingStim.size
GratingStim.contrast
GratingStim.color
GratingStim.colorSpace
GratingStim.opacity
GratingStim.interpolate
GratingStim.texRes
GratingStim.name
GratingStim.autoLog
GratingStim.draw
GratingStim.autoDraw
Details

Helper functions

ImageStim

As of PsychoPy version 1.79.00 some of the properties for this stimulus can be set using the syntax:

stim.pos = newPos

others need to be set with the older syntax:

stim.setImage(newImage)
Attributes
ImageStim
ImageStim.win
ImageStim.setImage
ImageStim.setMask
ImageStim.units
ImageStim.pos
ImageStim.ori
ImageStim.size
ImageStim.contrast
ImageStim.color
ImageStim.colorSpace
ImageStim.opacity
ImageStim.interpolate
ImageStim.contains
ImageStim.overlaps
ImageStim.name
ImageStim.autoLog
ImageStim.draw
ImageStim.autoDraw
ImageStim.clearTextures
Details

Line

MovieStim

Attributes
MovieStim
MovieStim.win
MovieStim.mask
MovieStim.units
MovieStim.pos
MovieStim.ori
MovieStim.size
MovieStim.opacity
MovieStim.name
MovieStim.autoLog
MovieStim.draw
MovieStim.autoDraw
MovieStim.loadMovie
MovieStim.play
MovieStim.seek
MovieStim.pause
MovieStim.stop
MovieStim.setFlipHoriz
MovieStim.setFlipVert
Details

PatchStim (deprecated)

Polygon

RadialStim

Attributes
RadialStim
RadialStim.win
RadialStim.tex
RadialStim.mask
RadialStim.units
RadialStim.pos
RadialStim.ori
RadialStim.size
RadialStim.contrast
RadialStim.color
RadialStim.colorSpace
RadialStim.opacity
RadialStim.interpolate
RadialStim.setAngularCycles
RadialStim.setAngularPhase
RadialStim.setRadialCycles
RadialStim.setRadialPhase
RadialStim.name
RadialStim.autoLog
RadialStim.draw
RadialStim.autoDraw
RadialStim.clearTextures
Details

RatingScale

Rect

ShapeStim

Attributes
ShapeStim
ShapeStim.win
ShapeStim.units
ShapeStim.vertices
ShapeStim.closeShape
ShapeStim.pos
ShapeStim.ori
ShapeStim.size
ShapeStim.contrast
ShapeStim.lineColor
ShapeStim.lineColorSpace
ShapeStim.fillColor
ShapeStim.fillColorSpace
ShapeStim.opacity
ShapeStim.interpolate
ShapeStim.name
ShapeStim.autoLog
ShapeStim.draw
ShapeStim.autoDraw
Details

SimpleImageStim

TextStim

Window

psychopy.visual.windowframepack - Pack multiple monochrome images into RGB frame

ProjectorFramePacker

psychopy.visual.windowwarp - warping to spherical, cylindrical, or other projections

Warper

Commonly used:

  • ImageStim to show images
  • TextStim to show texts

Shapes (all special classes of ShapeStim):

  • ShapeStim to draw shapes with arbitrary numbers of vertices
  • Rect to show rectangles
  • Circle to show circles
  • Polygon to show polygons
  • Line to show a line

Images and patterns:

  • ImageStim to show images
  • SimpleImageStim to show images without bells and whistles
  • GratingStim to show gratings
  • RadialStim to show an annulus, a rotating wedge, a checkerboard, etc.

Multiple stimuli:

  • ElementArrayStim to show many stimuli of the same type
  • DotStim to show and control movement of dots

Other stimuli:

  • MovieStim to show movies
  • RatingScale to collect ratings
  • CustomMouse to change the cursor in windows with a GUI. Note: this will be deprecated soon

General purpose (applies to other stimuli):

  • BufferImageStim to make a faster-to-show “screenshot” of other stimuli
  • Aperture to restrict visibility area of other stimuli

See also Helper functions

psychopy.data - functions for storing/saving/analysing data

ExperimentHandler

TrialHandler

StairHandler

MultiStairHandler

QuestHandler

FitWeibull

FitLogistic

FitNakaRushton

FitCumNormal

importConditions()

functionFromStaircase()

bootStraps()

Encryption

Some labs may wish to better protect their data from casual inspection or accidental disclosure. This is possible within PsychoPy using a separate python package, pyFileSec, which grew out of PsychoPy. pyFileSec is distributed with the StandAlone versions of PsychoPy, or can be installed using pip or easy_install via https://pypi.python.org/pypi/PyFileSec/

Some elaboration of pyFileSec usage and security strategy can be found here: http://pythonhosted.org//PyFileSec

Basic usage is illustrated in the Coder demo > misc > encrypt_data.py

psychopy.event - for keypresses and mouse clicks

psychopy.filters - helper functions for creating filters

psychopy.gui - create dialogue boxes

DlgFromDict

Dlg

fileOpenDlg

fileSaveDlg

psychopy.hardware - hardware interfaces

PsychoPy can access a wide range of external hardware. For some devices the interface has already been created in the following sub-packages of PsychoPy. For others you may need to write the code to access the serial port etc. manually.

Contents:

Cedrus (response boxes)

The pyxid package, written by Cedrus, is included in the Standalone PsychoPy distributions. See https://github.com/cedrus-opensource/pyxid for further info.

Example usage:

import pyxid

# get a list of all attached XID devices
devices = pyxid.get_xid_devices()

dev = devices[0] # get the first device to use
if dev.is_response_device():
    dev.reset_base_timer()
    dev.reset_rt_timer()

    while True:
        dev.poll_for_response()
        if dev.response_queue_size() > 0:
            response = dev.get_next_response()
            # do something with the response
Useful functions
Device classes

Cambridge Research Systems Ltd.

For stimulus display
BitsPlusPlus

Control a CRS Bits++ device. See typical usage in the class summary (and in the menu demos>hardware>BitsBox of PsychoPy’s Coder view).

Important: See note on BitsPlusPlusIdentityLUT

Attributes
BitsPlusPlus
BitsPlusPlus.mode
BitsPlusPlus.setContrast
BitsPlusPlus.setGamma
BitsPlusPlus.setLUT
Details
Finding the identity LUT

For the Bits++ (and related) devices to work correctly it is essential that the graphics card is not altering in any way the values being passed to the monitor (e.g. by gamma correcting). It turns out that finding the ‘identity’ LUT, where exactly the same values come out as were put in, is not trivial. The obvious LUT would have something like 0/255, 1/255, 2/255... in entry locations 0,1,2... but unfortunately most graphics cards on most operating systems are ‘broken’ in one way or another, with rounding errors and incorrect start points etc.

PsychoPy provides a few of the common variants of LUT and that can be chosen when you initialise the device using the parameter rampType. If no rampType is specified then PsychoPy will choose one for you:

from psychopy import visual
from psychopy.hardware import crs

win = visual.Window([1024,768], useFBO=True) #we need to be rendering to framebuffer
bits = crs.BitsPlusPlus(win, mode = 'bits++', rampType = 1)

The Bits# is capable of reporting back the pixels in a line and this can be used to test that a particular LUT is indeed providing identity values. If you have previously connected a BitsSharp device and used it with PsychoPy then a file will have been stored with a LUT that has been tested with that device. In this case set rampType = “configFile” for PsychoPy to use it if such a file is found.
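For example, continuing the snippet above (this assumes such a config file has already been saved for this monitor):

bits = crs.BitsPlusPlus(win, mode='bits++', rampType='configFile')  # use the stored, tested LUT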

BitsSharp

Control a CRS Bits# device. See typical usage in the class summary (and in the menu demos>hardware>BitsBox of PsychoPy’s Coder view).

Attributes
BitsSharp
BitsSharp.mode
BitsSharp.isAwake
BitsSharp.getInfo
BitsSharp.checkConfig
BitsSharp.gammaCorrectFile
BitsSharp.temporalDithering
BitsSharp.monitorEDID
BitsSharp.beep
BitsSharp.getVideoLine
BitsSharp.start
BitsSharp.stop

Direct communications with the serial port:

BitsSharp.sendMessage
BitsSharp.getResponse

Control the CLUT (Bits++ mode only):

BitsSharp.setContrast
BitsSharp.setGamma
BitsSharp.setLUT
Details
For display calibration
ColorCAL
Attributes
ColorCAL
Details

egi (pynetstation)

Interface to EGI Netstation

This is currently a simple import of pynetstation, which needs to be installed separately (but is included in the Standalone distributions of PsychoPy as of version 1.62.01).

installation:

Download the package from the link above and copy egi.py into your site-packages directory.

usage:

from psychopy.hardware import egi

For an example see the demos menu of the PsychoPy Coder For further documentation see the pynetstation website

Launch an fMRI experiment: Test or Scan

fORP response box

iolab

joystick (pyglet and pygame)

labjack (USB I/O devices)

The labjack package is included in the Standalone PsychoPy distributions. It differs slightly from the standard version distributed by labjack (www.labjack.com) in the import. For the custom distribution use:

from labjack import u3

NOT:

import u3

In all other respects the library is the same and instructions on how to use it can be found here:

http://labjack.com/support/labjackpython

Note

To use labjack devices you do need also to install the driver software described on the page above

Minolta

Minolta light-measuring devices See http://www.konicaminolta.com/instruments


class psychopy.hardware.minolta.LS100(port, maxAttempts=1)

A class to define a Minolta LS100 (or LS110?) photometer

You need to connect a LS100 to the serial (RS232) port and when you turn it on press the F key on the device. This will put it into the correct mode to communicate with the serial port.

usage:

from psychopy.hardware import minolta
phot = minolta.LS100(port)
if phot.OK:#then we successfully made a connection and can send/receive
    print phot.getLum()
Parameters:

port: string

the serial port that should be checked

maxAttempts: int

If the device doesn’t respond first time how many attempts should be made? If you’re certain that this is the correct port and the device is on and correctly configured then this could be set high. If not then set this low.

Troubleshooting:
 

Various messages are printed to the log regarding the function of this device, but to see them you need to set the printing of the log to the correct level:

from psychopy import logging
logging.console.setLevel(logging.ERROR)#error messages only
logging.console.setLevel(logging.INFO)#will give a little more info
logging.console.setLevel(logging.DEBUG)#will export a log of all communications

If you’re using a keyspan adapter (at least on OS X) be aware that it needs a driver installed. Otherwise no ports will be found.

Error messages:

ERROR: Couldn't connect to Minolta LS100/110 on ____:

This likely means that the device is not connected to that port (although the port has been found and opened). Check that the device has the [ in the bottom right of the display; if not turn off and on again holding the F key.

ERROR: No reply from LS100:

The port was found, the connection was made and an initial command worked, but then the device stopped communicating. If the first measurement taken with the device after connecting does not yield a reasonable intensity the device can sulk (not a technical term!). The “[” on the display will disappear and you can no longer communicate with the device. Turn it off and on again (with F depressed) and use a reasonably bright screen for your first measurement. Subsequent measurements can be dark (or we really would be in trouble!!).

checkOK(msg)

Check that the message from the photometer is OK. If there’s an error print it.

Then return True (OK) or False.

clearMemory()

Clear the memory of the device from previous measurements

getLum()

Makes a measurement and returns the luminance value

measure()

Measure the current luminance and set .lastLum to this value

sendMessage(message, timeout=5.0)

Send a command to the photometer and wait an allotted timeout for a response.

setMaxAttempts(maxAttempts)

Changes the number of attempts to send a message and read the output. Typically this should be low initially (if you aren’t sure that the device is set up correctly); after the first successful reading, you can set it higher.

setMode(mode='04')

Set the mode for measurements. Returns True (success) or False

‘04’ means absolute measurements. ‘08’ = peak ‘09’ = cont

See user manual for other modes

PhotoResearch

Supported devices:

  • PR650
  • PR655/PR670
psychopy.hardware.findPhotometer(ports=None, device=None)

Try to find a connected photometer/photospectrometer! PsychoPy will sweep a series of serial ports trying to open them. If a port successfully opens then it will try to issue a command to the device. If it responds with one of the expected values then it is assumed to be the appropriate device.

Parameters:
ports : a list of ports to search

Each port can be a string (e.g. ‘COM1’, ‘/dev/tty.Keyspan1.1’) or a number (for win32 comports only). If none are provided then PsychoPy will sweep COM0-10 on win32 and search known likely port names on OS X and linux.

device : string giving expected device (e.g. ‘PR650’, ‘PR655’, ‘LS110’).

If this is not given then an attempt will be made to find a device of any type, but this often fails

Returns:

  • An object representing the first photometer found
  • None if the ports didn’t yield a valid response
  • None if there were not even any valid ports (suggesting a driver not being installed)

e.g.:

photom = findPhotometer(device='PR655') #sweeps ports 0 to 10 searching for a PR655
print photom.getLum()
if hasattr(photom, 'getSpectrum'):#can retrieve spectrum (e.g. a PR650)
    print photom.getSpectrum()

psychopy.info - functions for getting information about the system

psychopy.iohub - ioHub event monitoring framework

ioHub monitors for device events in parallel with the PsychoPy experiment execution by running in a separate process from the main PsychoPy script. This means, for instance, that keyboard and mouse event timing is not quantized by the rate at which the window.swap() method is called.

ioHub reports device events to the PsychoPy experiment runtime as they occur. Optionally, events can be saved to a HDF5 file.

All iohub events are timestamped using the PsychoPy global time base (psychopy.core.getTime()). Events can be accessed as a device independent event stream, or from a specific device of interest.

A comprehensive set of examples that each use at least one of the iohub devices is available in the psychopy/demos/coder/iohub folder.

Note

This documentation is in very early stages of being written. Many sections regarding device usage details are simply placeholders. For information on devices or functionality that has not yet been migrated to the psychopy documentation, please visit the somewhat outdated original ioHub docs.

Using psychopy.iohub:

psychopy.iohub Specific Requirements
Computer Specifications

The design / requirements of your experiment itself can obviously influence what the minimum computer specification should be to provide good timing / performance.

The dual process design when running using psychopy.iohub also influences the minimum suggested specifications as follows:

  • Intel i5 or i7 CPU. A minimum of two CPU cores is needed.
  • 8 GB of RAM
  • Windows 7 +, OS X 10.7.5 +, or Linux Kernel 2.6 +

Please see the Recommended hardware section for further information that applies to PsychoPy in general.

Usage Considerations

When using psychopy.iohub, the following constraints should be noted:

  1. The pyglet graphics backend must be used; pygame is not supported.
  2. ioHub devices that report position data use the unit type defined by the PsychoPy Window. However, position data is reported using the full screen area and size the window was created in. Therefore, for accurate window position reporting, the PsychoPy window must be made full screen.
  3. On OS X, Assistive Device support must be enabled when using psychopy.iohub.
    • For OS X 10.7 - 10.8.5, instructions can be found here.
    • For OS X 10.9 +, the program being used to start your experiment script must be specifically authorized. Example instructions on authorizing an OS X 10.9 + app can be viewed here.
Software Requirements

When running PsychoPy using the OS X or Windows standalone distribution, all the necessary python package dependencies have already been installed, so the rest of this section can be skipped.

Note

Hardware specific software may need to be installed depending on the device being used. See the documentation page for the specific device hardware in question for further details.

If psychopy.iohub is being manually installed, first ensure the python packages listed in the Dependencies section of the manual are installed.

psychopy.iohub requires the following extra dependencies to be installed:

  1. psutil (version 1.2 +) A cross-platform process and system utilities module for Python.
  2. msgpack It’s like JSON, but fast and small.
  3. greenlet The greenlet package is a spin-off of Stackless, a version of CPython that supports micro-threads called “tasklets”.
  4. gevent (version 1.0 or greater) A coroutine-based Python networking library.
  5. numexpr Fast numerical array expression evaluator for Python and NumPy.
  6. pytables PyTables is a package for managing hierarchical datasets.
  7. pyYAML PyYAML is a YAML parser and emitter for Python.
Windows installations only
  1. pyHook Python wrapper for global input hooks in Windows.
Linux installations only
  1. python-xlib The Python X11R6 client-side implementation.
OSX installations only
  1. pyobjc : A Python ObjectiveC binding.
Starting the psychopy.iohub Process

To use ioHub within your PsychoPy Coder experiment script, ioHub needs to be started at the start of the experiment script. The easiest way to do this is by calling the launchHubServer function.

launchHubServer function
ioHubConnection Class

The psychopy.iohub.ioHubConnection object returned from the launchHubServer function provides methods for controlling the iohub process and accessing iohub devices and events.
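A minimal sketch of that workflow, using default settings (no configuration arguments are passed to launchHubServer here):

from psychopy import core
from psychopy.iohub import launchHubServer

io = launchHubServer()            # start the ioHub process; returns an ioHubConnection
keyboard = io.devices.keyboard    # access the ioHub Keyboard device

core.wait(2.0)                    # run the experiment / collect events for a while
for evt in keyboard.getEvents():  # events that occurred during the wait
    print(evt)

io.quit()                         # shut down the ioHub process at the end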

ioHub Devices and Device Events

psychopy.iohub supports a large and growing set of supported devices. Details for each device can be found in the following sections.

Keyboard Device
The iohub Keyboard device provides methods to:
  • Check for any new keyboard events that have occurred since the last time keyboard events were checked or cleared.
  • Wait until a keyboard event occurs.
  • Clear the device of any unread events.
  • Get a list of all currently pressed keys.
Keyboard Events

The Keyboard device can return two types of events, which represent key press and key release actions on the keyboard.

KeyboardPress Event
KeyboardRelease Event
Mouse Device and Events

TBC

Computer Device

TBC

XInput Gamepad Device and Events

TBC

Eye Tracker Devices and Events

TBC

Serial Port Device and Events

TBC

Analog Input Device and Events

TBC

Touch Screen Device and Events

TBC

psychopy.logging - control what gets logged

Provides functions for logging error and other messages to one or more files and/or the console, using python’s own logging module. Some warning messages and error messages are generated by PsychoPy itself. The user can generate more using the functions in this module.

There are various levels for logged messages with the following order of importance: ERROR, WARNING, DATA, EXP, INFO and DEBUG.

When setting the level for a particular log target (e.g. LogFile) the user can set the minimum level that is required for messages to enter the log. For example, setting a level of INFO will result in INFO, EXP, DATA, WARNING and ERROR messages being recorded but not DEBUG messages.

By default, PsychoPy will record messages of WARNING level and above to the console. The user can silence that by setting it to receive only CRITICAL messages (which PsychoPy doesn’t use), using the commands:

from psychopy import logging
logging.console.setLevel(logging.CRITICAL)
class psychopy.logging.LogFile(f=None, level=30, filemode='a', logger=None, encoding='utf8')

A text stream to receive inputs from the logging system

Create a log file as a target for logged entries of a given level

Parameters:
  • f:

    this could be a string to a path, that will be created if it doesn’t exist. Alternatively this could be a file object, sys.stdout or any object that supports .write() and .flush() methods

  • level:

    The minimum level of importance that a message must have to be logged by this target.

  • mode: ‘a’, ‘w’

    Append or overwrite existing log file

setLevel(level)

Set a new minimal level for the log file/stream

write(txt)

Write directly to the log file (without using the logging functions). Useful for sending messages that only this file receives
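For example, a minimal sketch that sends detailed messages to a file while keeping the console quieter (the file name is arbitrary):

from psychopy import logging

logFile = logging.LogFile('lastRun.log', level=logging.DEBUG, filemode='w')  # everything to file
logging.console.setLevel(logging.WARNING)  # console shows only warnings and errors
logging.exp('my trial started')            # recorded in the file, not shown on the console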

psychopy.logging.addLevel(level, levelName)

Associate ‘levelName’ with ‘level’.

This is used when converting levels to text during message formatting.

psychopy.logging.critical(message)

Send the message to any receiver of logging info (e.g. a LogFile) of level log.CRITICAL or higher

psychopy.logging.data(msg, t=None, obj=None)

Log a message about data collection (e.g. a key press)

usage:

log.data(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.DATA or higher

psychopy.logging.debug(msg, t=None, obj=None)

Log a debugging message (not likely to be wanted once experiment is finalised)

usage:

log.debug(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.DEBUG or higher

psychopy.logging.error(message)

Send the message to any receiver of logging info (e.g. a LogFile) of level log.ERROR or higher

psychopy.logging.exp(msg, t=None, obj=None)

Log a message about the experiment (e.g. a new trial, or end of a stimulus)

usage:

log.exp(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.EXP or higher

psychopy.logging.fatal(msg, t=None, obj=None)

log.critical(message) Send the message to any receiver of logging info (e.g. a LogFile) of level log.CRITICAL or higher

psychopy.logging.flush(logger=<psychopy.logging._Logger instance>)

Send current messages in the log to all targets

psychopy.logging.getLevel(level)

Return the textual representation of logging level ‘level’.

If the level is one of the predefined levels (CRITICAL, ERROR, WARNING, INFO, DEBUG) then you get the corresponding string. If you have associated levels with names using addLevelName then the name you have associated with ‘level’ is returned.

If a numeric value corresponding to one of the defined levels is passed in, the corresponding string representation is returned.

Otherwise, the string “Level %s” % level is returned.

psychopy.logging.info(msg, t=None, obj=None)

Log some information - maybe useful, maybe not

usage:

log.info(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.INFO or higher

psychopy.logging.log(msg, level, t=None, obj=None)

Log a message

usage:

log(msg, level, t=t, obj=obj)

Log the msg, at a given level on the root logger

psychopy.logging.setDefaultClock(clock)

Set the default clock to be used to reference all logging times. Must be a psychopy.core.Clock object. Beware that if you reset the clock during the experiment then the resets will be reflected here. That might be useful if you want your logs to be reset on each trial, but probably not.

psychopy.logging.warn(msg, t=None, obj=None)

log.warning(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.WARNING or higher

psychopy.logging.warning(message)

Sends the message to any receiver of logging info (e.g. a LogFile) of level log.WARNING or higher

psychopy.microphone - Capture and analyze sound

(Available as of version 1.74.00; Advanced features available as of 1.77.00)

Overview

AudioCapture() allows easy audio recording and saving of arbitrary sounds to a file (wav format). AudioCapture will likely be replaced entirely by AdvAudioCapture in the near future.

AdvAudioCapture() can do everything AudioCapture does, and also allows onset-marker sound insertion and detection, loudness computation (RMS audio “power”), and lossless file compression (flac). The Builder microphone component now uses AdvAudioCapture by default.

Speech2Text() provides speech recognition (courtesy of google), with about 1-2 seconds latency for a 2 sec voice recording. Note that the sound files are sent to google over the internet. Intended for within-experiment processing (near real-time, 1-2s delayed), in which priority is given to keeping an experiment session moving along, even if that means skipping a slow response once in a while. See coder demo > input > speech_recognition.py.

Eventually, other features are planned, including: speech onset detection (to automatically estimate vocal RT for a given speech sample), and interactive visual inspection of sound waveform, with playback and manual onset determination (= the “gold standard” for RT).

Audio Capture

Speech recognition

Misc

PsychoPy provides lossless compression using the FLAC codec. (This requires that flac is installed on your computer. It is not included with PsychoPy by default, but you can download it for free from http://xiph.org/flac/ ). Functions for file-oriented Discrete Fourier Transform and RMS computation are also provided.

psychopy.misc - miscellaneous routines for converting units etc

psychopy.misc has gradually grown very large and the underlying code for its functions are distributed in multiple files. You can still (at least for now) import the functions here using from psychopy import misc but you can also import them from the tools sub-modules.

From psychopy.tools.filetools

toFile(filename, data): save data (of any sort) as a pickle file
fromFile(filename): load data (of any sort) from a pickle file
mergeFolder(src, dst[, pattern]): merge a folder into another

From psychopy.tools.colorspacetools

dkl2rgb
dklCart2rgb
rgb2dklCart
hsv2rgb
lms2rgb
rgb2lms
dkl2rgb

From psychopy.tools.coordinatetools

cart2pol
cart2sph
pol2cart
sph2cart

From psychopy.tools.monitorunittools

convertToPix
cm2pix
cm2deg
deg2cm
deg2pix
pix2cm
pix2deg

From psychopy.tools.imagetools

array2image
image2array
makeImageAuto

From psychopy.tools.plottools

plotFrameIntervals(intervals): plot a histogram of the frame intervals

From psychopy.tools.typetools

float_uint8
uint8_float
float_uint16

From psychopy.tools.unittools

radians
degrees

psychopy.monitors - for those that don’t like Monitor Center

Most users won’t need to use the code here. In general the Monitor Centre interface is sufficient and monitors set up that way can be passed as strings to Window objects. The code here is useful if there is some aspect of the normal calibration that you wish to override, e.g.:

from psychopy import visual, monitors
mon = monitors.Monitor('SonyG55')#fetch the most recent calib for this monitor
mon.setDistance(114)#further away than normal?
win = visual.Window(size=[1024,768], monitor=mon)

You might also want to fetch the Photometer class for conducting your own calibrations

Monitor


GammaCalculator


getAllMonitors()

findPR650()

getLumSeriesPR650()

getRGBspectra()

gammaFun()

gammaInvFun()

makeDKL2RGB()

makeLMS2RGB()

psychopy.parallel - functions for interacting with the parallel port

This module provides read/write access to the parallel port for Linux or Windows.

The ParallelPort class described below will attempt to load whichever parallel port driver is first found on your system and should suffice in most instances. If you need to use a specific driver then, instead of using ParallelPort as shown below, you can use one of the following as drop-in replacements, forcing the use of a specific driver:

  • psychopy.parallel.PParallelInpOut32
  • psychopy.parallel.PParallelDLPortIO
  • psychopy.parallel.PParallelLinux

Either way, each instance of the class can provide access to a different parallel port.
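A minimal sketch of using the class (the address shown is a common LPT1 address on Windows; on Linux you might pass something like '/dev/parport0' instead):

from psychopy import parallel

port = parallel.ParallelPort(address=0x0378)  # address is an assumption; check your own system
port.setData(0)     # all data pins low
port.setPin(2, 1)   # raise pin 2 (bit 0), e.g. as a trigger
port.setData(0)     # lower it again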

There is also a legacy API which consists of the routines which are directly in this module. That API assumes you only ever want to use a single parallel port at once.

psychopy.parallel.ParallelPort

alias of PParallelLinux

Legacy functions

We would strongly recommend you use the class above instead: these are provided for backwards compatibility only.

parallel.setPortAddress(address=888)

Set the memory address or device node of your parallel port, to be used in subsequent commands

common port addresses:

LPT1 = 0x0378 or 0x03BC
LPT2 = 0x0278 or 0x0378
LPT3 = 0x0278
or for Linux:

/dev/parport0

This routine will attempt to find a usable driver depending on your platform

parallel.setData(data)

Set the data to be presented on the parallel port (one ubyte). Alternatively you can set the value of each pin (data pins are pins 2-9 inclusive) using setPin()

examples:

parallel.setData(0) #sets all pins low
parallel.setData(255) #sets all pins high
parallel.setData(2) #sets just pin 3 high (remember that pin2=bit0)
parallel.setData(3) #sets just pins 2 and 3 high

you can also convert base 2 to int very easily in python:

parallel.setData( int("00000011",2) )#pins 2 and 3 high
parallel.setData( int("00000101",2) )#pins 2 and 4 high
parallel.setPin(pinNumber, state)

Set a desired pin to be high(1) or low(0).

Only pins 2-9 (incl) are normally used for data output:

parallel.setPin(3, 1)#sets pin 3 high
parallel.setPin(3, 0)#sets pin 3 low
parallel.readPin(pinNumber)

Determine whether a desired (input) pin is high(1) or low(0).

Pins 2-13 and 15 are currently read here

psychopy.serial - functions for interacting with the serial port

PsychoPy is compatible with Chris Liechti’s pyserial package. You can use it like this:

import serial
ser = serial.Serial(0, 19200, timeout=1)  # open first serial port
#ser = serial.Serial('/dev/ttyS1', 19200, timeout=1)#or something like this for Mac/Linux machines
ser.write('someCommand')
line = ser.readline()   # read a '\n' terminated line
ser.close()

Ports are fully configurable with all the options you would expect of RS232 communications. See http://pyserial.sourceforge.net for further details and documentation.

pyserial is packaged in the Standalone (Windows and Mac distributions), for manual installations you should install this yourself.

psychopy.sound - play various forms of sound

Sound

PsychoPy currently supports a choice of two sound libraries: pyo, or pygame. Select which will be used via the audioLib preference. sound.Sound() will then refer to either SoundPyo or SoundPygame. This can be set on a per-experiment basis by importing preferences, and setting the audioLib preference to use.

It is important to use sound.Sound() in order for proper initialization of the relevant sound library. Do not use sound.SoundPyo or sound.SoundPygame directly. Because they offer slightly different features, the differences between pyo and pygame sounds are described here. Pygame sound is more thoroughly tested, whereas pyo offers lower latency and more features.
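A minimal sketch: set the audioLib preference before importing sound, then play a short tone (the choice of ‘pyo’ here is just an example):

from psychopy import prefs
prefs.general['audioLib'] = ['pyo']  # or ['pygame']; must be set before importing psychopy.sound
from psychopy import sound, core

tone = sound.Sound('A', secs=0.5)    # a 440 Hz tone lasting half a second
tone.play()
core.wait(0.5)                       # keep the script alive while the sound plays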

psychopy.tools - miscellaneous tools

Container for all miscellaneous functions and classes

psychopy.tools.colorspacetools

dkl2rgb
dklCart2rgb
rgb2dklCart
hsv2rgb
lms2rgb
rgb2lms
dkl2rgb
Function details

psychopy.tools.coordinatetools

cart2pol
cart2sph
pol2cart
sph2cart
Function details

psychopy.tools.filetools

Functions and classes related to file and directory handling

psychopy.tools.filetools.toFile(filename, data)

save data (of any sort) as a pickle file

simple wrapper of the cPickle module in core python

psychopy.tools.filetools.fromFile(filename)

load data (of any sort) from a pickle file

simple wrapper of the cPickle module in core python

psychopy.tools.filetools.mergeFolder(src, dst, pattern=None)

Merge a folder into another.

Existing files in dst folder with the same name will be overwritten. Non-existent files/folders will be created.

psychopy.tools.filetools.openOutputFile(fileName, append=False, delim=None, fileCollisionMethod='rename', encoding='utf-8')

Open an output file (or standard output) for writing.

Parameters:
fileName : string
The desired output file name.
append : bool, optional
If True, append data to an existing file; otherwise, overwrite it with new data. Defaults to False, i.e. overwriting.
delim : string, optional
The delimiting character(s) between values. For a CSV file this would be a comma; for a TSV file it would be a tab character ('\t'). Defaults to None.
fileCollisionMethod : string, optional
How to handle filename collisions. This is ignored if append is set to True. Defaults to rename.
encoding : string, optional
The encoding to use when writing the file. Defaults to 'utf-8'.
Returns:
f : file
A writable file handle.
Notes:

If no known filename extension is given, and the delimiter is a comma, the extension .csv will be chosen automatically. If the extension is unknown and the delimiter is a tab, the extension will be .tsv. .txt will be chosen otherwise.
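A minimal sketch following the behaviour described above (the file name is arbitrary):

from psychopy.tools.filetools import openOutputFile

f = openOutputFile('results', delim=',', fileCollisionMethod='rename')  # '.csv' is appended
f.write('participant,rt\n')
f.close()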

psychopy.tools.filetools.genDelimiter(fileName)

Return a delimiter based on a filename.

Parameters:
fileName : string
The output file name.
Returns:
delim : string
A delimiter picked based on the supplied filename. This will be ',' if the filename extension is .csv, and a tab character otherwise.

psychopy.tools.imagetools

array2image
image2array
makeImageAuto
Function details

psychopy.tools.monitorunittools

convertToPix
cm2deg
cm2pix
deg2cm
deg2pix
pix2cm
pix2deg
Function details

psychopy.tools.plottools

Functions and classes related to plotting

psychopy.tools.plottools.plotFrameIntervals(intervals)

Plot a histogram of the frame intervals.

Where intervals is either the name of a file saved by Window.saveFrameIntervals, or simply a list (or array) of frame intervals

psychopy.tools.typetools

psychopy.tools.unittools

psychopy.web - Web methods

Test for access

psychopy.web.haveInternetAccess(forceCheck=False)

Detect active internet connection or fail quickly.

If forceCheck is False, will rely on a cached value if possible.

psychopy.web.requireInternetAccess(forceCheck=False)

Checks for access to the internet, raise error if no access.

Upload a file over http

psychopy.web.upload(selector, filename, basicAuth=None, host=None, https=False, log=True)

Upload a local file over the internet to a configured http server.

This method handshakes with a php script on a remote server to transfer a local file to another machine via http (using POST).

Returns “success” plus a sha256 digest of the file on the server and a byte count. If the upload was not successful, an error code is returned (eg, “too_large” if the file size exceeds the limit specified server-side in up.php, or “no_file” if there was no POST attachment).

Note

The server that receives the files needs to be configured before uploading will work. php files and notes for a sys-admin are included in psychopy/contrib/http/. In particular, the php script up.php needs to be copied to the server’s web-space, with appropriate permissions and directories, including apache basic auth and https (if desired). The maximum size for an upload can be configured within up.php

A configured test-server is available; see the Coder demo for details (upload size is limited to ~1500 characters for the demo).

Parameters:

selector : (required, string)

a standard URL of the form http://host/path/to/up.php, e.g., http://upload.psychopy.org/test/up.php

Note

Limited https support is provided (see below).

filename : (required, string)

the path to the local file to be transferred. The file can be any format: text, utf-8, binary. All files are hex encoded while in transit (increasing the effective file size).

Note

Encryption (beta) is available as a separate step. That is, first encrypt() the file, then upload() the encrypted file in the same way that you would any other file.

basicAuth : (optional)
apache ‘user:password’ string for basic authentication. If a basicAuth value is supplied, it will be sent as the auth credentials (in cleartext); using https will encrypt the credentials.
host : (optional)
The default process is to extract host information from the selector. The host option allows you to specify a host explicitly (i.e., if it differs from the selector).
https : (optional)

If the remote server is configured to use https, passing the parameter https=True will encrypt the transmission including all data and basicAuth credentials. It is approximately as secure as using a self-signed X.509 certificate.

An important caveat is that the authenticity of the certificate returned from the server is not checked, and so the certificate could potentially be spoofed (see the warning under HTTPSConnection http://docs.python.org/library/httplib.html). Overall, using https can still be much more secure than not using it. The encryption is good, but that of itself does not eliminate all risk. Importantly, it is not as secure as one might expect, given that all major web browsers do check certificate authenticity. The idea behind this parameter is to require people to explicitly indicate that they want to proceed anyway, in effect saying “I know what I am doing and accept the risks (of using un-verified certificates)”.

Example:

See Coder demo / misc / http_upload.py

Author: Jeremy R. Gray, 2012

Proxy set-up and testing

psychopy.web.setupProxy(log=True)

Set up the urllib proxy if possible.

The function will use the following methods in order to try and determine proxies:
  1. standard urllib2.urlopen (which will use any statically-defined http-proxy settings)
  2. previous stored proxy address (in prefs)
  3. proxy.pac files if these have been added to system settings
  4. auto-detect proxy settings (WPAD technology)
Returns:True (success) or False (failure)


Changelog

Note

Version numbers

In general, when a new feature is added the second number is incremented (e.g. 1.00.05 -> 1.01.00). Those releases might break previous code you’ve written because new features often need slight changes to other things. Changes to the final digit (1.00.05 -> 1.00.06) indicate a bug-fixing release or very minor new features that shouldn’t require code changes from the user.

Changes in blue typically indicate things that alter the PsychoPy behaviour in a way that could break compatibility. Be especially wary of those!

PsychoPy 1.82

PsychoPy 1.82.01

Released Feb 2015

  • FIXED: problem with MovieStim2 showing black box instead of movie on certain systems
  • FIXED: problem with Tobii eye tracker not closing calibration window (Sol)
  • FIXED: better timing for non-slip routines that follow dynamic routines (Jeremy) #822
  • FIXED: problem with stimuli (e.g. shapes) not appearing if a texture had just been created and not yet drawn
  • FIXED: pygame sound engine complained about “global variable loops not defined”
  • ENHANCED: Filename collision handling for ExperimentHandler (Richard Höchenberger)
  • CHANGED: for text data outputs that give delim=’\t’ the file extension ‘.tsv’ is added instead of ‘.dlm’ (Richard Höchenberger)

PsychoPy 1.82.00

Released Jan 2015

  • ENHANCED: slightly faster rendering of movies for high-rate HD stimuli

  • CHANGED: pandas is now a strict requirement for the psychopy.data module

  • FIXED: Builder sounds from file no longer loop indefinitely

  • FIXED: Builder: microphone recordings are explicitly stopped at the end of every trial

  • FIXED: Static Components could become hidden by having unknown durations and then couldn’t be changed. Now they are always shown even when times are unknown (Jeremy)

  • ADDED: improved support for Cambridge Research Systems Display++ and Bits# devices:
    • Color++ and Mono++ modes now supported using shaders
    • fixed some bugs with search for identityLUT in Display++
  • ADDED: Psi adaptive staircase method (thanks Joseph Glavan for writing this)

  • ADDED: bidi and xlwt packages to the Standalone distribution

  • ADDED: support for Mouse.setPos() under pyglet back end (Jeremy)

  • ADDED: support for PST response box (Richard Höchenberger)

  • FIXED: extraInfo was not being saved in wide-text format

  • FIXED: Builder was not respecting order for drawing polygon - it was always drawn first

  • ADDED: Builder now supports ‘degFlat’ and ‘degFlatPos’ units and documentation has been added for these

PsychoPy 1.81

PsychoPy 1.81.03

Released Dec 2014

  • ADDED: Sounds in Builder can now have a duration set by a variable (changing each repeat). This change may cause a periodic ‘tick’ in the sound on some systems if the sound lasts longer than 10 s (probably dependent on sound card and driver)
  • IMPROVED: RatingScale will always display a custom description (‘scale’) if provided by the user
  • ADDED: Monitor Center can now calibrate non-primary monitors
  • FIXED: components in Builder can now be ‘stopped’ at the same time as they are started, so that they never show up (previously at least one frame was always drawn)
  • FIXED: several issues with Bits++ causing a rendering glitch and not being able to calibrate from Monitor Centre
  • FIXED: choice selection boxes stopped working in Monitor Centre (caused by hardware.crs.bits importing pyglet.gl)
  • FIXED: Bits# can be set to do gamma correction in the PsychoPy LUT (‘software’) rather than using the on-board gamma table file (‘hardware’)
  • FIXED: bug with monitor calib files not returning their linearization method correctly
  • ADDED: psychopy.qtgui as an alternative to gui, which doesn’t suffer from the problem of choice boxes clashing with pyglet (thanks Sol)
  • FIXED: data files now correctly include the originPath (the path to the script that created them). Thanks Alex Holcombe for the fix

PsychoPy 1.81.02

Released Oct 2014

  • FIXED: bug with gamma not being set from the Monitor file
  • FIXED: MovieStim2 warnings about dropped frames were crippling the output window
  • FIXED: new issue (introduced in 1.81.01) with several drop-down menus in Builder not allowing options to be selected

PsychoPy 1.81.01

Released Oct 2014

  • FIXED: bug with rendering of Movies from Builder (autoDraw() not working)
  • ADDED: option to use new movie backend from Builder (there is now an option to select opencv or avbin for movie rendering)
  • FIXED: when MovieStim2 couldn’t load frames fast enough it ran slow (it now drops frames but stays synchronised) (Sol)
  • FIXED: spurious warnings about GratingStim.__del__
  • FIXED: pyo audio crashed on windows if no mic/input was found (Sogo Hiroyuki)
  • ADDED: serial port device in iohub (Sol)

PsychoPy 1.81.00

Released Sept 2014

  • IMPROVED: cross-version compatibility:
    • In Builder, experiments from ‘future’ versions can be opened and unknown objects will be ignored (but kept)
    • In Code you can now do import psychopy; psychopy.useVersion(‘X.XX.XX’) to switch to any version greater than 1.76.00 (including versions not installed and future versions). This only affects the lib, not the application (thanks Erik Kastman for most of the work on this). A minimal sketch appears at the end of this list
  • IMPROVED: better unit tests for visual stimuli to prevent future bugs

  • FIXED: MovieStim was right-left flipping movies and this has been corrected. If you had been working around that by setting flipVertical=True then you’ll need to undo that correction

  • IMPROVED: Can now select a subset of conditions in Builder loops and in data.importConditions() function (thanks Mike MacAskill for help)

  • IMPROVED: In Builder, loops that don’t reflect trials (e.g. stimuli within a trial or blocks of trials) can be flagged as such, resulting in neater data files

  • ADDED: support for additional hardware:
    • basic support for interacting with the BlackBoxToolkit v2 via psychopy.hardware.bbtk

    • added basic support for CRS Bits# in psychopy.hardware.crs. New way to interface with Bits++ as well, using a class rather than a Window argument. See demo in demos>hardware

    • labjack digital outputs can be used as a Parallel Port Component in Builder

    • the screen rendering can now include a warping step to simulate spherical, cylindrical or custom warping (Jay Borseth)

    • the screen now supports ‘frame packing’ whereby sequential frames can be packed into one, as the red, green and blue channels for monochrome high-rate projectors (Jay Borseth)

    • ioHub eye tracker interface for GazePoint GP3 (Martin Guest)

    • ioHub Serial device:
      • Support for parsing a simple fixed-width or marker-delimited serial rx stream into device events.
      • Demo added showing usage with the PST Response box (Richard Höchenberger)
    • ioHub ioSync device:
      • Use Teensy 3.0 / 3.1 MCU. Connect via USB 2.0.
      • 8 / 8 digital inputs / outputs
      • 8 analog inputs (~12 - 13 bit effective resolution)
      • 1000 Hz sampling rate for analog and digital inputs.
      • Keyboard Host support (useful for testing keyboard delay variability from software alone)
  • IMPROVED packaging:
    • can now install on OSX using miniconda/anaconda distribution (Erik Kastman)
    • pyopencv (cv2) added to Standalone as an alternative to avbin
    • PySoundCard and PySoundFile added to Standalone
    • application is now compatible with wxPython 2.8, 2.9 and 3.0
  • IMPROVED: stimulus attributes:
    • Nearly all stimulus attributes now support new syntax, e.g. stim.pos = [0,0] as well as the previous stim.setPos([0,0]). All docs are updated to reflect this change.
    • All numeric stimulus attributes now support operations. Use e.g. stim.pos += [0,0.5]. Read more in Operations.
    • Many more stimulus attributes can now be set after initialization. They have the same name as the init parameters. E.g. stim.win = mySecondWindow changes which Window the stimulus is drawn to
  • IMPROVED: logging
    • CHANGED: log=None and autoLog=None inherits from parents, with visual.Window at the top of the hierarchy. None is now default for all stimuli and setter methods.
    • FIXED: removed unnecessary (e.g. duplicate) logging.
    • IMPROVED: unnamed stimuli are now given a default name in the logs for easier identification, e.g. “unnamed ShapeStim”.
  • IMPROVED: you can now specify the standard deviation (default=3) for the Gaussian mask in various stimuli by setting e.g. maskParams={‘sd’:5} during init or after init.

  • ADDED: language localization (Builder and Coder)
    • Can now display the app menus, tooltips, and so on in a language other than US English (selectable via prefs -> app -> locale)
    • Almost all displayed text can be translated (Jeremy Gray, Hiroyuki Sogo)
    • A Japanese translation is available (Hiroyuki Sogo)
    • Other translations will be easy to add; see online developer notes on using Poedit
  • FIXED: several other minor bugs (that would have given exceptions if encountered). Thanks particularly to Philip Wiesemann for finding several of these

  • FIXED: machines that didn’t support shaders or framebuffer objects were raising an error on win.flip() if the useFBO argument was not manually set to False. Machines that don’t support the new rendering methods are now handled more gracefully
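
A minimal sketch of the useVersion mechanism described above (the version string is only an example of a post-1.76.00 release, and the call is assumed to come before importing any other psychopy sub-modules):

    import psychopy
    psychopy.useVersion('1.80.06')  # request the lib from this (example) release

    # sub-modules imported after this point come from the requested lib version;
    # the Builder/Coder application itself is unaffected
    from psychopy import visual, core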

PsychoPy 1.80

PsychoPy 1.80.07

Released Aug 2014

FIXED: bug with timing of keys when using the timestamped argument

PsychoPy 1.80.06

Released June 2014

  • FIXED: problem with using the framebuffer object (nothing was rendered at all)
  • ENH: added support for using a stencil when the framebuffer object is turned on

PsychoPy 1.80.05

Released June 2014

  • IMPROVED: better unit tests for visual stimuli to prevent further regressions of the issues below
  • FIXED: machines that didn’t support shaders or framebuffer objects were raising an error on win.flip() if the useFBO argument was not manually set to False. Machines that don’t support the new rendering methods are now handled more gracefully.
  • FIXED: further fixes to greyscale coloring (some images were not correctly detected as greyscale by PIL so tests weren’t working)
  • FIXED: named colors were not interpreted correctly by the visual.Window (but worked fine for stimuli)
  • FIXED: the error message about TextBox/FontManager not working doesn’t show up any more
  • FIXED: reinstated the requirement that wx is version 2.8.x only until we get time to check 3.0 compatibility more deeply

PsychoPy 1.80.04

Released April 2014

  • FIXED: buglets in logging. Logging wasn’t encoding unicode correctly for console targets (but file targets were OK) and some duplicate messages were occurring for stimulus autologs
  • FIXED: buglet with GratingStim/PatchStim when texture was not a square power of two (was crashing due to incorrect global variable)
  • FIXED: ElementArrayStim was not updating its position using .setFieldPos()

PsychoPy 1.80.03

Released April 2014

  • FIXED: Shader code was ignoring opacity setting for ImageStim
  • FIXED: Mouse clock was not the same as PsychoPy’s general events clock (so out of sync) (Sol & Jeremy)

PsychoPy 1.80.02

Released April 2014

  • FIXED: ImageStim did not use its mask on some machines (nVidia and ATI?) or did not render at all on others (intel graphics?)
  • CHANGED: Sound object now checks if the sound is a note name before checking for file names (only affects cases where the file name was something like A.wav)
  • ADDED: Aperture now supports contains() and overlaps() methods
  • ADDED: Image/Grating masks can now also be ‘cross’ (Suddha Sourav)
  • FIXED: Unicode problem for microphone on non-English installs of win32
  • FIXED: StairHandler first reversal now changes step size correctly and added option not to use the initial 1-up,1-down regime (Jon maintains that you should though!) (thanks Nathanael Larigaldie)
  • FIXED: emulator LaunchScan uses new RatingScale syntax

PsychoPy 1.80.01

Released Mar 2014

  • FIXED: buglet with movie glPopAttrib() on Intel gfx cards (thanks Bryan Cort)
  • FIXED: problem trying to use FrameBufferObject (FBO) on Intel GMA graphics cards
  • FIXED: problem with ImageStim not respecting setColor() and setContrast()
  • FIXED: some stimuli were failing to switch to a second window when requested
  • FIXED: some rendering glitches with ShapeStim caused by interpolation settings (thanks to Soyogu Matsushita for finding this fix)
  • FIXED: automated import of gamma for known monitors, which was failing on some monitor calibration files
  • FIXED: a single-line conditions file is now imported correctly by Builder (Jeremy Gray)
  • IMPROVED: a Routine not included in a loop now saves its data to a default ‘loop’ (Jeremy Gray)
  • IMPROVED: Coder checks for consistency of end-of-line options (thanks Wilbert van Ham)

PsychoPy 1.80.00

Released Mar 2014

  • Improvements to user interface:
    • the glitch that prevented scrolling the Routine view is gone (win32)
    • dialog boxes in the Builder now have tabs for categories of controls
    • Code Components have much more space for each piece of code (again due to tabs)
  • ADDED: In Builder you can now customise the data filename/path in the Experiment Settings. Any variables in the expInfo dialog box can be used to create this path. See dataFileName for further info

  • ADDED: support for advanced rendering modes. Can now ‘add’ rather than average when using transparency. This is better for visual compound stimuli like plaids, and essential for colored anaglyph stimuli where the resulting image needs to be the sum of the left and right eye images.

  • ADDED: new visual unit options: ‘degFlatPos’ and ‘degFlat’ provide more accurate conversions from degrees to pixels for drawing stimuli (although they’re more accurate, accounting for the flat screen, they may look strange because 1 degree gets larger with greater eccentricity on a flat screen). The previous unit ‘deg’ still exists and remains default as, for many studies, these are expected

  • ADDED: wider support for the functions contains and overlaps. Most stimuli now have these methods. Also they can now be used irrespective of whether the stimulus and other object have the same units (previously they only worked for units of pix)

  • ADDED: support for other shapes in the Aperture stimulus (and its Builder Component). You can either specify the number of vertices nVert and a size to get a regular polygon aperture, or you can provide a set of arbitrary vertices as your shape argument

  • CHANGED: Size of ‘square’ or ‘triangle’ apertures used to represent the radius of the circle on which their vertices lay. It is now a height/width as you would more likely expect. This means aperture code in scripts may need rewriting to be smaller.

  • IMPROVED: stimulus duration is now more precise when using duration (s) or time (s) although using nFrames option is still advised for brief stimuli

  • IMPROVED: there are now fewer irrelevant lines in the log file as stimuli are initially created

  • IMPROVED: Staircase loops in Builder now initialise just before the staircase is run, rather than at the start of the experiment. This means they can be controlled by an outer loop and, effectively, restarted

  • FIXED: ElementArrayStim can take Nx3 or 1x3 values for colors again

  • FIXED: variable names in Builder are now case-sensitive again (they were being forced to lower case when importing csv files)

  • FIXED: incorrect equation for the Cumulative Normal fitting function

  • FIXED: If your variable had a new line character in it this was causing a new line to be started in the csv data file. These are now handled correctly

  • ADDED: RatingScale markerStart position can be arbitrary, e.g., can start between items or beyond the end of scale

  • ADDED: RatingScale tickHeight can be used to control the height of tickMarks, including no tick marks (tickHeight=0)

  • ADDED: RatingScale marker=’hover’ is similar to HTML-style hovering over clickable elements

psychopy.visual.RatingScale Changes:

  • CHANGED: Builder: remove option: choiceLabelsAboveLine; change lowAnchorText, highAnchorText -> labels
  • CHANGED: skipping a rating now adds None as the final element in the history
  • CHANGED: the default minTime is shorter, now 0.4s
  • CHANGED: more info in the log when creating a rating scale object
  • CHANGED: removed showAnchors: now use labels=None (instead of showAnchors=False)
  • CHANGED: removed lowAnchorText & highAnchorText: now use labels=[‘leftAnchor’, ‘rightAnchor’] or with optional 3rd midpoint label
  • CHANGED: renamed several parameters: stretchHoriz -> stretch, textSizeFactor -> textSize, ticksAboveLine -> tickHeight, displaySizeFactor -> size, markerStyle -> marker, customMarker -> marker
  • CHANGED: removed showScale: now use scale=None (instead of showScale=False)
  • CHANGED: removed allowSkip: now use skipKeys=None (instead of allowSkip=False)
  • CHANGED: removed escapeKeys; no longer supported but it’s easy to implement (as now done in the coder demo)

PsychoPy 1.79

PsychoPy 1.79.01

Released Dec 2013

  • FIXED: startup crash in 1.79.00
  • FIXED: long-standing memory leak in MovieStim
  • FIXED: problem with MovieStim playing the audio but not displaying the image
  • ADDED: volume attribute to MovieStim (Frank Papenmeier)
  • FIXED: experiments were crashing if first line of a conditions file contained a float but the rest were integers
  • FIXED: QuestHandler.addResponse() should not try to replace existing intensity on first trial (Richard Höchenberger)
  • FIXED: Window’s viewPos and viewScale attributes could not be changed
  • FIXED: Builder code generation for Cedrus Box when user provided a limited set of available buttons
  • FIXED: multiple issues causing fatal errors when setting stimulus parameters (Pieter Moors and Damien Mannion)
  • FIXED: Builder experiments would crash under certain conditions when there was no ‘participant’ in the info dialog box (Philipp Wiesemann)
  • FIXED: bug toggling readme file window in Builder (Philipp Wiesemann)
  • FIXED: further fix to the Coder raising excessive ‘this file has changed’ warnings
  • FIXED: Component names now update on the Routine panel after being changed in a dialog (Philipp Wiesemann)
  • FIXED: bug importing conditions if the first row of numbers was the only float. (importFromConditions now uses numpy instead of matplotlib)
  • FIXED: further fix to the extra “file close” queries during shut-down

PsychoPy 1.79.00

Released Dec 2013

  • ADDED: attributes for some stimuli can now be updated using e.g. stim.pos = newPos rather than using stim.setPos(newPos) to make things more like standard Python (thanks Jonas Lindeløv). This version also involved some major restructuring behind the scenes that should not be visible to users (thanks Todd Jennings)

  • ADDED: Builder Components for
    • ioLab Systems button-box; refactor PsychoPy’s ioLabs code (Jeremy)
    • Cedrus button-box (tested on RB730)
    • parallel port output component
  • ADDED: option for sounds to loop

  • ADDED: volume argument for MovieStim so that sound can be muted (Frank Papenmeier)

  • ADDED: window now prevents the system from sleeping/starting the screensaver on Windows and OS X

  • ADDED: builder demo for mental rotation task

  • ADDED: Alternative Text stimulus, psychopy.visual.TextBox (Sol Simpson)
    • Two demos in psychopy.coder.visual.textbox
    • Requires: freetype lib (included in Standalone)
    • Advantages: Very fast update following text change; very precise character placement.
    • Disadvantages: Supports monospace fonts only.
    • IMPORTANT: TextBox is still being finalized and completed; expect to find (and please report) issues. API changes guaranteed.
  • FIXED: misaligned responses in csv output for QuestHandler (Zhili Zheng)

  • FIXED: bug when using ElementArrayStim with numpy 1.7.1. Most elements were receiving SF=0

  • FIXED: ‘semi-automatic’ calibration (thanks Flip Phillips)

  • FIXED: shut-down issues. Builder now remembers its last experiment and you don’t get multiple messages about the scripts that have changed

  • FIXED: bugs with MultiStairHandler that were making it unusable (in code and Builder)

  • FIXED: lists of key presses can now be considered correct (Ian Hussey)

  • FIXED: certain further cases of bitmap images appearing desaturated

  • FIXED: mono sounds now duplicate to both channels correctly

  • changes to Standalone packages (require fetching the installer):
    • pyFileSec for uploading files to server using encryption (this is Jeremy’s module)
    • pandas on win32 is now v1.3 (was already this version on OS X)
    • pyxid now includes Jared’s upstream bug-fix
  • FIXED: many user interface tweaks, documentation and help string corrections (Philip Wiesemann)

  • FIXED: PsychoPy Coder view now closes the iohub process when the experiment script is terminated using ‘Stop’. (Sol Simpson)

  • FIXED: Builder use of single staircase loops now respects the min/max values

  • CHANGED: data curve fitting functions are now using scipy.optimize.curve_fit and should hopefully be more robust to local minima(?)

psychopy.iohub Changes:

  • ADDED: Initial release of the new Touch device:
    • currently supporting Elo brand Touch Screens.
    • any Elo model supporting the SmartSet protocol should work (Elo 2700 model used for testing to date)
    • Touch Events (TouchPress, TouchRelease, TouchMovement) are provided in a separate event stream
    • Touch and Mouse device events are independent of each other, so both devices can be used in parallel without interference
    • Touch screen calibration routine provided; calibration state can be saved to device hardware for persistence
    • See the demos.coder.iohub_extended Touch script for example of calibration graphics front end.
  • ADDED: Keyboard and Mouse events can be restricted to those events targeted at a PsychoPy Window. Currently supported on Windows and Linux only.

  • NEW: PsychoPy TrialHandler can now be used to feed experiment condition variables to the ioDataStore.

  • NEW: Device configuration file can now be specified to the launchHubServer() function when starting the ioHub Process.

  • NEW: Simple examples of how to use iohub within a Builder project using a Custom Code Component.

  • FIXED: Analog Input Event delay calculation error that was causing incorrect time correction to be applied to this event type.

  • NEW: LabJack AnalogInput interface now handles dropped samples and sampling rates that cause multichannel samples to be split between USB packets.

  • FIXED: Gaze position calculation fix for the SMI eye tracker interface during binocular tracking.

  • NEW: Enhanced Tobii eye tracker setup and calibration graphics:
    • Head position within the 3D eye tracking head box can be visualized before and after calibration
    • Animated fixation target support added during calibration routine
  • ADDED: the following EXPERIMENTAL-stage implementation (use at your own risk):
    • ioDataStore -> Pandas Data Frame based post processing API:
      • Creates a set of Pandas Data Frames for device events, experiment messages, and experiment condition variables.

      • Filter, Group, Join data using the Pandas API.

      • Access event information with associated condition variable states.

      • Define Interest Periods (IP):
        • filter events temporally based on start and end time criteria.
        • define an IP’s start and end time criteria using experiment message events, or experiment condition variable columns.
        • recurring IPs supported.
        • overlapping IPs supported.
      • Define Regions of Interest (ROI):
        • filter Mouse, Eye Tracker, and Touch device events based on screen location.
        • circle, ellipse, rectangle, and general polygon ROI shapes supported. (ROI functionality is dependent on the shapely python package)
      • IMPORTANT: The ioDataStore->DataFrame API is still being designed and developed. Expect to find issues. API changes guaranteed.

PsychoPy 1.78

PsychoPy 1.78.01

Released Aug 2013

  • FIXED: Image Components were showing up as pastel versions when no actual image was provided
  • FIXED: MultiStairHandler wasn’t working on Builder, and had insufficient data outputs when using wide-text csv files
  • FIXED: loops couldn’t be deleted from the Flow if their conditions file couldn’t be found (e.g. had been moved)
  • FIXED: setting of color values was not honouring the autolog setting (was always logging)
  • FIXED: gui choice boxes now handle unicode in their options as well as ASCII strings (thanks Anne Peschel)
  • FIXED: Scaling bug for SMI eye-tracker in binocular mode (thanks Sol)
  • FIXED: Builder Code Components that were showing up in unreadable, single-line boxes
  • IMPROVED: All Builder Dialogs now appear close to the top of the screen (so they don’t shoot off the bottom of most screens)

PsychoPy 1.78.00

Released Aug 2013

  • ADDED: option to preload during Builder scripts using Static Component, which uses StaticPeriod class
  • ADDED: Polygon Component to Builder for drawing regular polygons (including simple lines)
  • ADDED: TrialHandler can now fetch previous trials as well as future ones (thanks Mike MacAskill)
  • ADDED: BufferImageStim accepts mask and pos params (thanks Jeremy)
  • ADDED: generated Sounds (not sound files) now use a Hamming window to get rid of sharp onset/offset noises (thanks Jeremy)
  • ADDED: microphone component able to play & identify a marker tone (for vocal RT), compute loudness, compression (Jeremy)
  • ADDED: sound files: lossless compress / uncompress (requires flac executable installed separately) (Jeremy)
  • ADDED: microphone compress() audio recordings; requires flac download (not packaged with PsychoPy)
  • ADDED: new preference flac = system path for flac, e.g. c:/Program Files (x86)/FLAC/flac.exe (not always needed)
  • FIXED: greyscale images were being distorted during display since 1.77.00
  • FIXED: reduced the number of queries when closing down, and the message now provides the filenames of the changed files (thanks Piotr Iwaniuk)
  • FIXED: MovieStim.contains() and .overlaps() now work, but require that the visual.Window has units of pix

PsychoPy 1.77

PsychoPy 1.77.02

released July 2013

  • FIXED: problem with Builder Images appearing grey unless they were ‘constant’. This is a bug that was introduced in 1.77.00 with the faster loading of images.
  • FIXED: having a monitors folder with a unicode character in the path doesn’t break the app (thanks Sebastiaan Mathot)

PsychoPy 1.77.01

released June 2013

  • Standalone package changes:
    • fixed pytables version on Win32 (to be compatible with WinXP)
    • pyo upgraded to 0.6.6 on OSX and Win32
  • FIXED: The recent files list in Builder now contains recent files! (Thanks Piotr Iwaniuk)

  • FIXED: Timing issue with LC Tech eye-tracker in iohub

PsychoPy 1.77.00

released June 2013

  • ADDED: preview of Sol Simpson’s ioHub for faster (asynchronous) polling of hardware including mouse, keyboard, eyetrackers and other devices. See iohub demos for example usage. This provides many advantages over previous event polling:
    • asynchronous process allows constant polling (not tied to refresh rates) in a way that won’t impact the rendering of your stimuli. It even runs on a separate CPU core if possible.
    • provides up/down/duration for key presses
    • provides the unicode character (rather than simply the key name) for keyboard events
    • provides a unified API for eyetracker classes
    • provides async access to the parallel port
    • provides an alternative data output format (using hdf5) particularly useful for high-output streaming data (e.g. eye-trackers)
  • DEPRECATED: opensslwrap will soon be replaced by pyFileSec, a much-improved version of the same package (= file-oriented encryption)

  • IMPROVED: substantially (~40%) faster loading of RGB images from disk (by using byte format rather than float). May also allow storing of more images on graphics card than previously

  • ADDED: AdvancedMicrophone class to add and retrieve a high-frequency tone to indicate the start of recording (e.g., to allow accurate vocal RT estimation), with demo (Jeremy Gray)

  • REFACTORED: parallel port support. Support for Windows via inpout32/inpout64 and Linux via pyparallel added. Existing API maintained for single port usage, but new PParallel classes added to provide more flexibility when dealing with multiple ports. see psychopy.parallel - functions for interacting with the parallel port (Thanks Mark Hymers)

  • ADDED: MovieStim now updates its status attribute to FINISHED, in line with other stimuli

  • CHANGED: microphone default file names include milliseconds (to avoid two files with the same name)

  • ADDED: color-word speech-recognition demo (coder > input > speech_recognition.py)

  • ADDED: in Builder components dialog boxes, text that will be interpreted as code is displayed in monospace font

  • ADDED: remove and warn about trailing whitespace in Builder component values (but not Text fields)

  • ADDED: support for pyglet version 1.2 alpha (but 1.1.4 is still recommended - it appears to render faster)

  • ADDED: more sound.SoundPyo methods (get & set duration, volume, looping)

  • FIXED: event.Mouse() can obtain a default visual.Window(), if one has already been created

  • ADDED: Builder components generate a compile-time warning if a field’s value looks dynamic but its updating is constant (Jeremy Gray)

  • ADDED: better simulated scanner-noise in launchScan (just for fun)

  • ADDED: RatingScale.getHistory() returns intermediate time-stamped ratings; allows “continuous” ratings

  • CHANGED: RatingScale.getRating() no longer returns False prior to an accepted rating (now returns the currently selected value)

PsychoPy 1.76

PsychoPy 1.76.00

The compatibility changes listed below are likely to affect very few users

  • ADDED: Window.callOnFlip() function to allow arbitrary functions to be called, timed precisely to the point where the frame flip has occurred (see Coder Demos>Timing>callOnFlip; a minimal sketch also appears at the end of this list)

  • FIXED: a scaling bug in RatingScale descriptions (Giuseppe Pagnoni)

  • ADDED: support for mirror-image text, and mirror-image BufferImageStim (Jeremy Gray)

  • ADDED: support for lower latency sound with the pyo library. For now pygame remains the default but this can be changed by setting the order in preferences>general>audio

  • CHANGED: PsychoPy Standalone is now being built using python 2.7.3 (rather than 2.6). Under OSX psignifit has been removed from this distribution, as have the libraries to create .mov files using Window.saveMovieFrames(). If you need those features then install the 1.75 Standalone and then update to 1.76 using the auto-update system.

  • ADDED: sound objects (either pygame or pyo) now support autologging

  • FIXED: a bug in the generation of the LMS color space conversion matrix. It seems nobody was actually using this for real, but if you were contact Jon for details!

  • CHANGED: various changes to RatingScale (thanks Henrik Singman):
    • CHANGED: choices are now displayed at the tick marks by default (instead of above the line). To restore the old behavior set labels=False. This does not affect experiments created in older versions of the Builder.
    • ADDED: check box “choiceLabelsAboveLines” to the RatingScale component of the Builder (advanced tab) to still have the choice labels above the line.
    • ADDED: arguments tickMarks and labels to RatingScale class to control where tick marks (for quantitative rating scales) should be placed along the line and how they should be labeled.
    • ADDED: argument ticksAboveLine to RatingScale class. Controls where the tick marks should be plotted (above or below the line).
  • FIXED: problem with unset exp.name (was causing wx.Dialog error “TypeError: String or Unicode type required” on new experiments)

  • CHANGED: exp.name is no longer available from Builder scripts (use exp.getExpName() instead)

  • FIXED: problem with tiling of depth values for ElementArrayStim (thanks Yuri Spitsyn)

  • FIXED: Fix to setContrast for certain visual stimuli (Jonas Lindeløv)

  • FIXED: inability to launch scripts/experiments if the Mac Standalone was in a folder with a space in it

  • FIXED: Aperture Component now honours the ‘units’ (Hiroyuki Sogo)

  • FIXED: stimulus contains/overlaps functions now use stimulus ‘units’ and take stimulus orientation into account (Hiroyuki Sogo) NB if you had code in place to perform these corrections yourself you should now remove it!

  • FIXED: some data outputs were not honouring the ‘matrixOnly’ option (Mike MacAskill)

  • FIXED: when loading a psydat file of an ExperimentHandler the file automatically saved new copies of its csv/excel outputs. This no longer occurs (if loaded using misc.fromFile)

  • ADDED: timestamp option to event.waitKeys() (Jonas Lindeløv)

  • ADDED: a first-run wizard to check the system, report as html (somewhat experimental) (Jeremy Gray)

  • ADDED: a benchmark wizard (Tools menu) to test hardware & software, option to share on psychopy.org (Jeremy Gray)

  • ADDED: info.getRAM() (Jeremy Gray)

  • FIXED: Fall back to primary display if a secondary one is specified but unavailable. (Erik Kastman)
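
A minimal sketch of the Window.callOnFlip() usage mentioned at the top of this list (resetting a clock exactly at the next frame flip; the window size here is arbitrary):

    from psychopy import visual, core

    win = visual.Window([800, 600])
    clock = core.Clock()

    # clock.reset will be called at the moment the flip occurs, so subsequent
    # clock.getTime() values are measured from that frame onset
    win.callOnFlip(clock.reset)
    win.flip()

    win.close()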

PsychoPy 1.75.01

  • FIXED: Bug with not being able to play sounds of blank (infinite) duration from Builder

PsychoPy 1.75.00

  • CHANGED: New Builder experiments will, by default, save a single csv file, a single psydat file and a single log file. Previously an Excel file (with one sheet per loop) and many psydat files (one per loop) were also saved. This can be changed in settings. Psydat files can still be used to re-output any format of data file.
  • IMPROVED: Experiment info dialog box easier to control now from experiment settings (user doesn’t need to write a dictionary by hand any more)
  • IMPROVED: Components in the Builder are now arranged in categories, including a special ‘Favorites’ category
  • IMPROVED: Code Components now support full syntax highlighting and code folding (but still aren’t quite big enough!)
  • ADDED: Builder undo/redo now gives info about what is going to be un/redone
  • ADDED: Window now supports a stereo flag to provide support for quad-buffers (advanced graphics cards only)
  • FIXED: bug with copying/pasting Routines that was breaking Flow in certain situations and corrupting the experiment file
  • FIXED: fatal typo in QuestHandler code (Gary Lupyan)
  • FIXED: data outputs for multiple key/mouse presses
  • ADDED: Microphone now supports stop to abort recording early (Jeremy Gray)
  • ADDED: beginning of error reporting when generating Builder experiments (thanks Piotr Iwaniuk)
  • FIXED: csv files are now generated from Builder as expected, rather than tab-delimited dlm files

PsychoPy 1.74

PsychoPy 1.74.04

  • IMPROVED: larger Code Component boxes (and fixed bug with being only one line on linux)
  • FIXED: Builder code syntax error when using Mouse set state ‘every frame’
  • FIXED: Builder was erroneously using ‘estimated duration’ for constraining non-slip timing
  • FIXED: Builder couldn’t open Experiment Settings if the expected screen number didn’t exist on this system

PsychoPy 1.74.03

(released Aug 2012)

  • FIXED: the multiline text entry box in the Builder Text Component was broken (thanks Piotr Iwaniuk)
  • IMPROVED: serial (RS232) interface to fORP button box to avoid recording repeated presses (thanks Nate Vack). Does not affect use of fORP box from USB interface.

PsychoPy 1.74.02

(released Aug 2012)

  • FIXED: bug leading to message: IndexError: string index out of range. This was caused by a problem saving Excel files
  • FIXED: bug leading to message: AttributeError: ImageStim instance has no attribute ‘rgbPedestal’. Was only occurring on non-shaders machines using the new ImageStim.
  • FIXED: problem loading old ExperimentHandlers that contained MultiStairHandlers
  • FIXED: Builder Text Components gave an error if letter height was a variable
  • ADDED: Window.flip() now returns the timestamp for the flip if possible (thanks Sol Simpson)
  • ADDED: misc.sph2cart (Becky Sharman)
  • ADDED: warning when user presents SimpleImageStim that seems to extend beyond screen (James McMurray)

PsychoPy 1.74.01

(released July 2012)

  • FIXED: the pyo package is now included in the windows Standalone distribution (making audio input available as intended)
  • FIXED: error saving excel data from numpy.int formats (Erik Kastman)
  • FIXED: error at end of automated gamma calibration (which was causing a crash of the calibration script)
  • FIXED: misc.getDateStr() returns numeric date if there’s an error with unicode encoding (Jeremy)
  • FIXED: added partial support for non-ASCII keyboards (Sebastiaan Mathot)

PsychoPy 1.74.00

(released July 2012)

Highlights (and compatibility changes):

  • CHANGED: Builder experiments saved from this version will NOT open in older versions
  • ADDED: ‘non-slip’ timing methods to the Builder interface (improved timing for imaging experiments) See Non-slip timing for imaging for further info
  • ADDED: Long-wide data file outputs, which are now the default for all new Builder experiments. See Long-wide data file outputs
  • CHANGED: The psydat output files from Builder have also changed. They are now ExperimentHandler objects, which contain all loops in a single file. Previously they were TrialHandlers, which required one file for each loop of the experiment. Analysis scripts will need slight modifications to handle this
  • CHANGED: The summarised excel/csv outputs now have an additional column for the order of the stimulus as presented. This may affect any automated analysis you perform on your spreadsheet outputs
  • RESTRUCTURED: the generation of ‘summarised’ data outputs (text and excel) were also rewritten in this version, so make sure that your data files still contain all the data you were expecting
  • ADDED: basic audio capture (and speech recognition via google!). Builder now has a Microphone Component to record inputs, but does not yet use the speech recognition facility. See psychopy.microphone library, Coder demo “input/say_rgb.py” and Builder demo “voiceCapture”. (Jeremy)
  • ADDED: HSV color space for all stimuli
  • CHANGED: in Builder the default psychopy.visual.DotStim has signalDots=’same’ (once a signal dot, always a signal dot). Only affects new experiments
  • CHANGED: data.FitCumNormal now uses a slightly different equation, which alters the interpretation of the parameters (but not the quality of fit). Parameters from this function before version 1.74 cannot be compared with new values.
  • CHANGED: pygame is no longer being formally supported/tested although it will probably continue to work for some time.

Additional changes:

  • ADDED: contains() and intersects() methods to visual shape stimuli (including Rect etc) to determine whether a point or array of points is within the present stimulus boundaries
  • FIXED: missing parameter name in conditions file is detected, triggers more informative error message
  • ADDED: fORP: option asKeys to handle button presses as pyglet keyboard events (when using a serial port); faster getUniqueEvents()
  • ADDED: basic file encryption (beta) using RSA + AES-256; see API encryption for usage and caveats
  • ADDED: upload a file to a remote server over http (libs: web.upload) with coder demo, php scripts for server (contrib/http/)
  • ADDED: Builder demo (dualRatingScales): show a stim, get two different ratings side by side [unpack the demos again]
  • ADDED: rating scale options: ‘maxTime’ to time-out, ‘disappear’ to hide after a rating; see new Builder demo
  • FIXED: rating scale bug: skipKeys was not handling ‘tab’ properly (no skip for tab-key, do skip for ‘t’, ‘a’, or ‘b’)
  • ADDED: new locale pref for explicitly setting locale, used in date format and passed to builder scripts (Jeremy, Hiroyuki Sogo)
  • ADDED: ‘enable escape’ option in experiment settings, default is ‘enabled’
  • ADDED: support for ElementArrayStim to use the same set of color spaces as other stimuli
  • CHANGED: removed python 2.4’s version of sha1 digest from RunTimeInfo
  • CHANGED: removed any need for PyOpenGL (pyglet.gl now used throughout even for pygame windows)
  • FIXED: Builder was ignoring changes to DotStim FieldPos (thanks Mike MacAskill)
  • FIXED: Builder Flow is smarter about Loops and now stops you creating ‘broken’ ones (e.g. Loops around nothing)
  • FIXED: MovieStim used from Builder was not working very well. Sounds continued when it was told to stop and the seek(0.0001) line was causing some file formats not to work from Builder only (those that don’t support seeking)
  • FIXED: Mouse component was not saving clicks in Builder experiments if forceEndOnClick was set to be False
  • FIXED: DotStim.setFieldCoherence was having no effect if noise dots were updating by ‘position’
  • FIXED: TextStim.setColor() was not updating stimulus properly when haveShaders=False
  • FIXED: In Builder, sound duration was not being used in creating new sounds
  • CHANGED: Under linux, although you will be warned if a new version is available, it will not be auto-installed by PsychoPy (that should be done by your package manager)
  • FIXED: csv/dlm data outputs no longer have a trailing delimiter at end of line
  • FIXED: all test suite tests should now pass :-)

PsychoPy 1.73

PsychoPy 1.73.06

(released April 2012)

  • FIXED: xlsx outputs were collapsing raw data from trials with non-response
  • FIXED: monitor gamma grids are now returned as arrays rather than lists (Ariel Rokem)
  • FIXED: bug with Window.setColor being incorrectly scaled for some spaces
  • FIXED: buglet preventing unicode from being used in TrialHandler parameter names (William Hogman) and saving to data files (Becky Sharman)
  • FIXED: StairHandler in Builder now saves the expInfo dictionary (Jeremy)
  • FIXED: can unpickle from either old-style or new-style data files (using psychopy.compatibility.fromFile()) (Erik Kastman)

PsychoPy 1.73.05

(Released March 2012)

  • FIXED: Joystick error when calling getHat() or getHats() (fixed by Gary Lupyan)
  • FIXED: BufferImageStim crashing on some linux boxes (due to bug with checking version of OpenGL) (fixed by Jonas Lindelov)
  • FIXED: fMRI emulator class was providing old-format key events (fixed by Erik Kastman and Jeremy)
  • FIXED: Win.setRecordFrameIntervals(True) was including the time since it was turned off as a frame interval (fixed by Alex Holcombe)
  • FIXED: using forceEndtrial from a mouse component in Builder wasn’t working (thanks Esteban for the heads-up)
  • FIXED: visual.Circle now respects the edges parameter (fixed by Jonas Lindelov)
  • FIXED: having IPython v0.12 should no longer crash psychopy on startup (Jeremy)
  • FIXED: non-ascii month-name (eg Japanese) from %B is now filtered out to avoid a crash when compiling a psyexp script (Jeremy)
  • ADDED: support for usb->serial devices under linux (William Hogman)
  • ADDED: option to vertically flip a BufferImageStim upon capture (esp for fMRI-related presentation of text) (Jeremy)
  • ADDED: option to play a sound (simple tone) during fMRI launchScan simulation (Jeremy)

PsychoPy 1.73.04

(released Feb 2012)

  • CHANGED: Builder scripts now silently convert division from integers to float where necessary. That means 1/3=0.333 whereas previously 1/3=0. This is done simply by adding the line from __future__ import division at the top of the script, which people using Coder might want to think about too.
  • FIXED: problem with loading .psydat files using misc.fromFile (thanks Becky)
  • FIXED: issue on OSX with updating from 1.70 binaries to 1.73 patch release

PsychoPy 1.73.03

(released Jan 2012)

  • FIXED: problem with loops crashing during save of xlsx/csv files if conditions were empty
  • FIXED: bugs in Builder setting Dots coherence and direction parameters
  • FIXED: problem with strange text and image rendering on some combinations of ATI graphics on Windows machines

PsychoPy 1.73.02

(released Jan 2012)

  • ADDED: loop property to MovieStim for coder only so far (thanks Ariel Rokem)
  • FIXED: buglet requesting import of pyaudio (thanks Britt for noticing and Dan Shub for fixing)
  • FIXED: problem with avbin (win32)
  • FIXED: problem with unicode characters in filenames preventing startup
  • FIXED: bug with ‘fullRandom’ method of TrialHandler missing some trials during data save
  • FIXED: Mouse.clickReset() now resets the click timers
  • FIXED(?): problem with avbin.dll not being found under 64-bit windows

PsychoPy 1.73.00

(released Jan 2012)

  • CHANGED: psychopy.log has moved to psychopy.logging (Alex Holcombe’s suggestion). You’ll now get a deprecation warning for using psychopy.log but it will still work (for the foreseeable future)
  • ADDED: new hardware.joystick module supporting pyglet and pygame backends for Windows and OS X (not working on Linux yet). See demos>input
  • ADDED: support for CRS ColorCAL mkII for gamma calibrations in Monitor Center.
  • ADDED: data.ExpHandler to combine data for multiple separate loops in one study, including output of a single wide csv file. See demos>experimental control>experimentHandler. Support from Builder should now be easy to add
  • ADDED: ability to fix (seed) the pseudorandom order of trials in Builder random/full-random loops
  • ADDED: auto-update (and usage stats) can now detect proxies in proxy.pac files. Also this now runs in a low-priority background thread to prevent any slowing at startup time.
  • FIXED: bug when passing variables to Staircase loops in Builder
  • FIXED: mouse in Builder now ignores button presses that began before the ‘start’ of the mouse
  • FIXED: can now use pyaudio instead of pygame for sounds, although it still isn’t recommended (thanks Ariel Rokem for patch)

PsychoPy 1.72.00

(rc1 released Nov 2011)

  • CHANGED: gui.Dlg and gui.dlgFromDict can now take a set of choices and will convert to a choice control if this is used (thanks Manuel Ebert)
    • for gui.Dlg the .addField() method now has choices attribute
    • for gui.dlgFromDict if one of the values in the dict is a list it will be interpreted as a set of choices (NB this potentially breaks old code)
    • for info see API docs for psychopy.gui
  • ADDED: improvements to drawing of shapes (thanks Manuel Ebert for all)
    • ShapeStim now has a size parameter that scales the locations of vertices
    • new classes; Rect, Line, Circle, Polygon
  • FIXED: error with DotStim when fieldSize was a tuple and fieldShape was ‘sqr’

  • FIXED: calibration plots in Monitor Center now resize and quit as expected

  • FIXED: conditions files can now have lists of numbers [0,0]

  • FIXED: buglet with flushing mouse events (thanks Sebastiaan Mathot)

  • FIXED: Builder components now draw in order, from top to bottom, so lower items obscure higher ones

  • FIXED: problem with Patch Component when size was set to be dynamic

  • FIXED: problem with Builder loops not being able to change type (e.g. change ‘random’ into ‘staircase’)

  • FIXED: data from TrialHandler can be output with unicode contents (thanks Henrik Singmann)

PsychoPy 1.71

PsychoPy 1.71.01

(released Oct 2011)

  • CHANGED: the number of stimulus-resized and frames-dropped warnings is now limited to 5 (could become a preference setting?)
  • FIXED: Builder now allows images to have size of None (or ‘none’ or just blank) and reverts to using the native size of the image in the file
  • FIXED: occasional glitch with rendering caused by recent removal of depth testing (it was getting turned back on by TextStim.draw())
  • FIXED: opening a builder file from coder window (and vice versa) switches view and opens there
  • FIXED: problem showing the About... item on OS X Builder view
  • FIXED: problem with loops not showing up if the conditions file wasn’t found
  • FIXED: runTimeInfo: better handling of cwd and git-related info
  • FIXED: rating scale: single click with multiple rating scales, auto-scale with precision = 1
  • IMPROVED: rendering speed on slightly older nVidia cards (e.g. GeForce 6000/7000 series) under win32/linux. ElementArrays now render at full speed. Other cards/systems should be unchanged.
  • IMPROVED: rating scale: better handling of default description, scale=None more intuitive
  • ADDED: new function getFutureTrial(n=1) to TrialHandler, allowing users to find out what a trial will be without actually going to that trial
  • ADDED: misc.createXYs() to help creating a regular grid of xy values for ElementArrayStim

PsychoPy 1.71.00

(released Sept 2011)

  • CHANGED: Depth testing is now disabled. It was already being recommended that depth was controlled purely by drawing order (not depth settings) but this is now the *only* way to do that

  • CHANGED: The Builder representation of the Components onset/offset is now based on ‘estimatedStart/Stop’ where a value has been given. NB this does not affect the actual onset/offset of Components merely its representation on the timeline.

  • ADDED: Builder loop conditions mini-editor: (right-click in the filename box in a loop dialog)
    • create, edit, and save conditions from within PsychoPy; save & load using pickle format
    • preview .csv or .xlsx conditions files (read-only)
  • ADDED: RatingScale method to allow user to setMarkerPosition()

  • ADDED: Builder dialogs display a ‘$’ to indicate fields that expect code/numeric input

  • ADDED: Text Component now has a wrapWidth parameter to control the bounding box of the text

  • ADDED: Opacity parameter to visual stimulus components in the Builder, so you can now draw plaids etc from the builder

  • FIXED: can edit or delete filename from loop dialog

  • FIXED: bug in RunTimeInfo (no longer assumes that the user has git installed)

  • FIXED: bug in BufferImageStim

  • FIXED: bug in Builder Ratingscale (was always ending routine on response)

  • FIXED: problem with nested loops in Builder. Inner loop was not being repeated. Loops are now only created as they are needed in the code, not at the beginning of the script

  • FIXED: rendering of many stimuli was not working beyond 1000 elements (fixed by removal of depth testing)

  • FIXED: mouse component now using start/duration correctly (broken since 1.70.00)

  • FIXED: when changing the texture (image) of a PatchStim, the stimulus now ‘remembers’ if it had been created with no size/sf set and updates these for the new image (previously the size/sf got set according to the first texture provided)

  • FIXED: putting a number into a Builder Sound Component now produces a sound of that frequency

  • FIXED: added ‘sound’,’misc’,’log’ to the component names that PsychoPy will refuse. Also a slightly more informative warning when the name is already taken

  • FIXED: Opacity parameter was having no effect on TextStim when using shaders

  • FIXED: bug with MovieStim not starting at beginning of movie unless a new movie was added each routine

PsychoPy 1.70

PsychoPy 1.70.02

  • FIXED: bug in Builder Ratingscale (was always ending routine on response)
  • FIXED: problem with nested loops in Builder. Inner loop was not being repeated. Loops are now only created as they are needed in the code, not at the beginning of the script
  • FIXED: rendering of many stimuli was not working beyond 1000 stimuli (now limit is 1,000,000)
  • FIXED: mouse component now using start/duration correctly (broken since 1.70.00)
  • FIXED: when changing the texture (image) of a PatchStim, the stimulus now ‘remembers’ if it had been created with no size/sf set and updates these for the new image (previously the size/sf got set according to the first texture provided)
  • CHANGED: Depth testing is now disabled. It was already being recommended that depth was controlled purely by drawing order (not depth settings) but this is now the only way to do that
  • CHANGED: The Builder representation of the Components onset/offset is now based on ‘estimatedStart/Stop’ where a value has been given. NB this does not affect the actual onset/offset of Components merely its representation on the timeline.

PsychoPy 1.70.01

(Released Aug 2011)

  • FIXED: buglet with Builder (1.70.00) importing older files not quite right and corrupting the ‘allowedKeys’ of keyboard component
  • FIXED: buglet with SimpleImageStim. On machines with no shaders some images were being presented strangely
  • FIXED: buglet with PatchStim. After a call to setSize, SF was scaling with the stimulus (for unit types where that shouldn’t happen)

PsychoPy 1.70.00

(Released Aug 2011)

NB This version introduces a number of changes to Builder experiment files that will prevent files from this version being opened by earlier versions of PsychoPy

  • CHANGED use of allowedKeys in Keyboard Component. You used to be able to type ynq to get those keys, but this was confusing when you then needed ‘space’ or ‘left’ etc. Now you must type ‘y’,’n’,’q’, which makes it more obvious how to include ‘space’,’left’,’right’...

  • CHANGED dot algorithm in DotStim. Previously the signalDots=same/different was using the opposite to Scase et al’s terminology, now they match. Also the default method for noiseDots was ‘position’ and this has been changed to ‘direction’. The documentation explaining the algorithms has been clarified. (see Dots (RDK) Component)

  • CHANGED MovieStim.playing property to be called MovieStim.status (in keeping with other stimuli)

  • CHANGED names:

    • data.importTrialTypes is now data.importConditions
    • forceEndTrial in Keyboard Component is now forceEndRoutine
    • forceEndTrialOnPress in Mouse Component is now forceEndRoutineOnPress
    • trialList and trialListFile in Builder are now conditions and conditionsFile, respectively
    • ‘window units’ to set Component units is now ‘from exp settings’ for less confusion
  • CHANGED numpy imports in Builder scripts:

    • only a subset of numpy features are now imported by default: numpy: sin, cos, tan, log, log10, pi, average, sqrt, std, deg2rad, rad2deg, linspace, asarray, random, randint, normal, shuffle
    • all items in the numpy namespace are available as np.*
    • if a pre-v1.70 script breaks due to this change, try prepending ‘np.’ or ‘np.random.’
  • CHANGED: Builder use of $. $ can now appear anywhere in the field (previously only the start). To display a ‘$’ character now requires ‘\$’ in a text field (to prevent interpretation of normal text as being code).

  • ADDED flexibility for start/stop in Builder Components. Can now specify stimuli according to;

    • variable values (using $ symbol). You can also specify an ‘expected’ time/duration so that something is still drawn on the timeline
    • number of frames, rather than time (s), for greater precision
    • an arbitrary condition (e.g. otherStim.status==STOPPED )
  • ADDED the option to use a raised cosine as a PatchStim mask (thanks Ariel Rokem)

  • ADDED a preference setting for adding custom path locations to Standalone PsychoPy

  • ADDED Dots Component to Builder interface for random dot kinematograms

  • ADDED wide-format data files (saveAsWideText()) (thanks Michael MacAskill)

  • ADDED option for full randomization of repeated lists (loop type ‘fullRandom’) (Jeremy)

  • ADDED builder icons can now be small or large (in prefs)

  • ADDED checking of conditions files for parameter name conflicts (thanks Jeremy)

  • ADDED emulate sync pulses and user key presses for fMRI or other scanners (for testing); see hardware/launchScan in the API reference, and Coder demos > experimental control > fMRI_launchScan.py (Jeremy)

  • ADDED right-clicking the expInfo in Experiment Settings tests & previews the dialog box (Jeremy)

  • ADDED syntax checking in code component dialog, right-click (Jeremy)

  • IMPROVED documentation (thanks Becky Sharman)

  • IMPROVED syntax for using $ in code snippets (e.g., “[$xPos, $yPos]” works) (Jeremy)

  • IMPROVED Flow and Routine displays in the Builder, with zooming; see the View menu for key-board shortcuts (Jeremy)

  • IMPROVED Neater (and slightly faster) changing of Builder Routines on file open/close

  • FIXED demos now unpack to an empty folder (Jeremy)

  • FIXED deleting an empty loop from the flow now works (Jeremy)

  • FIXED further issue in QUEST (the addition in 1.65.01 was being used too widely)

  • FIXED bug with updating of gamma grid values in Monitor Center

PsychoPy 1.65

PsychoPy 1.65.02

Released July 2011

  • FIXED Builder keyboard component was storing ‘all keys’ on request but not all RTs

  • FIXED Aperture Component in Builder, which was on for an entire Routine. Now supports start/stop times like other components

  • IMPROVED Sound stimuli in Builder:

    • FIXED: sounds could be distorted and would repeat if duration was longer than file
    • ADDED volume parameter to sound stimuli
    • FIXED: duration parameter now stops a file half-way through if needed
  • FIXED buglet preventing some warning messages being printed to screen in Builder experiments

  • FIXED bug in the copying/pasting of Builder Routines, which was previously introducing errors into the script (invalid _continueName values)

PsychoPy 1.65.01

(Released July 2011)

  • FIXED buglets in QUEST handler (thanks Gerrit Maus)
  • FIXED absence of pygame in 1.65.00 Standalone release
  • ADDED shelve module to Standalone (needed by scipy.io)
  • ADDED warnings about going outside the monitor gamut for certain colors (thanks Alex Holcombe)

PsychoPy 1.65.00

(Released July 2011)

  • ADDED improved gamma correction using L=a+(b+kI)**G formula (in addition to industry-standard form). Existing gamma calibrations will continue to use old equation but new calibrations will take the new extended formula by default.
  • ADDED MultiStairHandler to run multiple interleaved staircases (also from the Builder)
  • ADDED createFactorialTrialList, a convenience function for full factorial conditions (thanks Marco Bertamini)
  • CHANGED Builder keyboard components now have the option to discard previous keys (on by default)
  • CHANGED RatingScale:
    • ADDED: argument to set lineColor independently (thanks Jeff Bye)
    • CHANGED default marker is triangle (affects windows only)
    • ADDED single-click option, custom-marker support
    • FIXED: bug with precision=1 plus auto-rescaling going in steps of 10 (not 1)
  • FIXED errors with importing from ‘ext’ and ‘contrib’
  • FIXED error in joystick demos
  • FIXED bug in ElementArrayStim depth
  • FIXED bug in misc.maskMatrix. Was not using correct scale (0:1) for the mask stage
  • FIXED buglet in StairHandler, which was only terminating during a reversal
  • FIXED bug when loading movies - they should implicitly pause until first draw() (thanks Giovanni Ottoboni)
  • IMPROVED handling of non-responses in Builder experiments, and this can now be the correct answer too (corrAns=None). ie. can now do go/no-go experiments. (Non-responses are now empty cells in excel file, not “–” as before.)
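
To illustrate the extended gamma equation mentioned above, here is a small sketch; the parameter values are invented purely for illustration and do not correspond to any real calibration:

    # Sketch of the extended gamma formula L = a + (b + k*I)**G quoted above.
    # a, b, k and G would normally come from a monitor calibration; the
    # defaults below are made up solely for illustration.
    def extended_gamma(I, a=0.5, b=0.3, k=2.0, G=2.2):
        """Model luminance for a requested intensity I in the range 0-1."""
        return a + (b + k * I) ** G

    print(extended_gamma(0.0), extended_gamma(1.0))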

PsychoPy 1.64

PsychoPy 1.64.00

Released April 2011

  • ADDED option to return field names when importing a trial list (thanks Gary Lupyan)

  • ADDED Color-picker on toolbar for Coder and context menu for Builder (Jeremy Gray)

  • ADDED CustomMouse to visual (Jeremy Gray)

  • ADDED Aperture object to visual (thanks Yuri Spitsyn) and as a component to Builder (Jeremy Gray)

  • CHANGED RatingScale (Jeremy Gray):
    • FIXED bug in RatingScale that prevented scale starting at zero
    • ADDED RatingScale “choices” (non-numeric); text size, color, font, & anchor labels; pos=(x,y) (Jeremy Gray)
    • CHANGED RatingScale internals; renamed escapeKeys as skipKeys; subject now uses ‘tab’ to skip (Jeremy Gray)
  • ADDED user-configurable code/output font (see coder prefs to change)

  • ADDED gui.Dlg now automatically uses checkboxes for bools in inputs (Yuri Spitsyn)

  • ADDED RatingScale component for Builder (Jeremy Gray)

  • ADDED packages to Standalone distros:
    • pyxid (Cedrus button boxes)
    • labjack (good, fast, cheap USB I/O device)
    • egi (pynetstation)
    • pylink (SR Research eye trackers)
    • psignifit (bootstrapping, but only added on mac for now)
  • ADDED option for Builder components to take code (e.g. variables) as start/duration times

  • ADDED support for RGBA files in SimpleImageStim

  • IMPROVED namespace management for variables in Builder experiments (Jeremy Gray)

  • IMPROVED prefs dialog

  • IMPROVED test sequence for PsychoPy release (so hopefully fewer bugs in future!)

  • FIXED bug with ElementArrayStim affecting the subsequent color of ShapeStim

  • FIXED problem with the error dialog from Builder experiments not being a sensible size (since v1.63.03 it was just showing a tiny box instead of an error message)

  • FIXED Coder now reloads files changed outside the app when needed (thanks William Hogman)

  • FIXED Builder Text Component now respects the font property

  • FIXED problem with updating to a downloaded zip file (win32 only)

  • FIXED bug with ShapeStim.setOpacity when no shaders are available

  • FIXED long-standing pygame scaling bug

  • FIXED you can now scroll Builder Flow and still insert a Routine way to the right

PsychoPy 1.63

PsychoPy 1.63.04

Released Feb 2011

  • FIXED bug in windows prefs that prevented v1.63.03 from starting up
  • FIXED bug that prevented the Minolta LS100 from being found

PsychoPy 1.63.03

Released Feb 2011

  • ADDED Interactive shell to the bottom panel of the Coder view. Choose (in prefs) one of;
    • pyShell (the default, with great tooltips and help)
    • IPython (for people that like it, but beware it crashes if you create a psychopy.visual.Window() due to some threading issue(?))
  • ADDED scrollbar to output panel

  • FIXED small bug in QUEST which gave an incorrectly-scaled value for the next() trial

  • FIXED ElementArrayStim was not drawing correctly to second window in multi-display setups

  • FIXED negative sound durations coming from Builder, where sound was starting later than t=0

  • FIXED a problem where Builder experiments failed to run if ‘participant’ wasn’t in the experiment info dialog

PsychoPy 1.63.02

Released Feb 2011

  • ADDED clearFrames option to Window.saveMovieFrames

  • ADDED support for Spectrascan PR655/PR670

  • ADDED ‘height’ as a type of unit for visual stimuli

    NB. this is likely to become the default unit for new users (set in prefs) but for existing users the unit set in their prefs will remain. That means that your system may behave differently to your (new user) colleague’s

  • IMPROVED handling of damaged experiments in Builder (they don’t crash the app any more!)

  • IMPROVED performance of autoLogging (including demos showing how to turn off autoLog for dynamic stimuli)

PsychoPy 1.63.01

Released Jan 2011

  • FIXED bug with ElementArrayStim.setFieldPos() not updating

  • FIXED mouse release problem with pyglet (introduced in 1.63.00)

  • ADDED ability to retrieve a timestamp for a mouse event, similar to those in keyboard events.

    This is possible even though you may not retrieve the mouse event until later (e.g. waiting for a frame flip). Thanks Dave Britton

  • FIXED bug with filters.makeGrating: gratType=’sqr’ was not using ori and phase

  • FIXED bug with fetching version info for autoupdate (was sometimes causing a crash on startup if users selected ‘skip this version’)

  • CHANGED optimisation routine from fmin_powell to fmin_bfgs. It seems more robust to starting params.

PsychoPy 1.63.00

Released Dec 2010

  • ADDED autoLog mechanism:
    • many more messages sent, but only written when log.flush() is called
    • rewritten backend to logging functions to remove file-writing performance hit
    • added autoLog and name attributes to visual stimuli
    • added setAutoDraw() method to visual stimuli (draws on every win.flip() until set to False)
    • added logNextFlip() method to visual.Window to send a log message time-stamped to flip
  • FIXED bug in color calibration for LMS color space (anyone using this space should recalibrate immediately) Thanks Christian Garber for picking up on this one.

  • FIXED bug with excel output from StairHandler

  • FIXED bug in ElementArrayStim.setSizes()

  • FIXED bug in running QuestHandler (Zarrar Shehzad)

  • FIXED bug trying to remove a Routine from Flow when enclosed in a Loop

  • FIXED bug with inserting Routines into Flow under Linux

  • FIXED bug with playing a MovieStim when another is already playing

  • CHANGED default values for Builder experiment settings (minor)

  • CHANGED ShapeStim default fillColor to None (from (0,0,0))

  • FIXED DotStim now supports a 2-element fieldSize (x,y) again

  • CHANGED phase of RadialStim to be ‘sin’ instead of ‘cosine’ at phase=0

  • FIXED rounding issue in RadialStim phase

  • FIXED ElementArrayStim can now take a 2x1 input for setSizes(), setSFs(), setPhases()

  • ADDED packages to standalone distributions: pyserial, pyparallel (win32 only), parallel python (pp), IPython

  • CHANGED Builder demos are now back in the distributed package. Use >Demos>Unpack... to put them in a folder you have access to and you can then run them from the demos menu

  • FIXED bug with ShapeStim initialisation (since 1.62.02)

  • UPDATED: Standalone distribution now uses Python2.6 and adds/upgrades;
    • parallel python (pp)
    • pyserial
    • ioLabs
    • ipython (for future ipython shell view in coder)
    • numpy=1.5.1, scipy=0.8.0, matplotlib=1.0
  • UPDATED: Builder demos

PsychoPy 1.62

PsychoPy 1.62.02

Released Oct 2010

  • FIXED: problem with RadialStim causing subsequent TextStims not to be visible
  • FIXED: bug with saving StairHandler data as .xlsx
  • ADDED: option for gui.fileOpenDlg and fileSaveDlg to receive a custom file filter
  • FIXED: builder implementation of staircases (initialisation was buggy)
  • FIXED: added Sound.setSound() so that sounds in builder can take new values each trial
  • FIXED: when a Routine was copied and pasted it didn’t update its name properly (e.g. when inserted into the Flow it kept the original name)
  • FIXED: color rendering for stimuli on non-shader machines using dkl,lms, and named color spaces
  • ADDED: data.QuestHandler (Thanks to Zarrar Shehzad). This is much like StairHandler but uses the QUEST routine of Watson and Pelli (see the sketch after this list)
  • CHANGED: TextStim orientation now goes the other way, for consistency with other stimuli (thanks Manuel Spitschan for noticing)
  • FIXED: Problem with DotStim using ‘sqr’ fieldShape
  • ADDED: MovieStim now has a setMovie() method (a copy of loadMovie())
  • FIXED: problem with MovieStim.loadMovie() when a movie had already been loaded
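
A hedged sketch of how the new QuestHandler might be driven from code; the argument values and the addResponse/mean method names follow the current data API and are assumptions here, not guarantees about this specific release:

    # Sketch only: an adaptive staircase using data.QuestHandler.
    from psychopy import data

    quest = data.QuestHandler(startVal=0.5, startValSd=0.2,
                              pThreshold=0.63, gamma=0.01,
                              nTrials=20, minVal=0.0, maxVal=1.0)
    for intensity in quest:
        # ...present a stimulus at this intensity and collect a response...
        correct = 1                 # placeholder response for illustration
        quest.addResponse(correct)  # QUEST updates its estimate
    print('estimated threshold:', quest.mean())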

PsychoPy 1.62.01

Released Sept 2010

  • ADDED: clicking on a Routine in the Flow window brings that Routine to current focus above

  • ADDED: by setting a loop in the Flow to have 0 repeats, that part of your experiment can be skipped

  • CHANGED: builder hides mouse now during fullscreen experiments (should make this a pref or setting though?)

  • FIXED: rendering problem with the Flow and Routine panels not updating on some platforms

  • ADDED: added .pause() .play() and .seek() to MovieStim (calling .draw() while paused will draw current static frame)

  • FIXED: bug in MovieStim.setOpacity() (Ariel Rokem)

  • FIXED: bug in win32 - shortcuts were created in user-specific start menu not all-users start menu

  • CHANGED: data output now uses std with N-1 normalisation rather than (scipy default) N

  • FIXED: bug when .psyexp files were dropped on Builder frame

  • FIXED: bug with Builder only storing the last letter of multi-key buttons (e.g. ‘left’->’t’) under certain conditions

  • FIXED: when nReps=0 in Builder the loop should be skipped (was raising error)

  • CHANGED: mouse icon is now hidden for full-screen Builder experiments

  • FIXED: Builder was forgetting the TrialList file if you edited something else in the loop dialog

  • ADDED: visual.RatingScale and a demo to show how to use it (Jeremy Gray)

  • ADDED: The Standalone distributions now includes the following external libs:
    • pynetstation (import psychopy.hardware.egi)
    • ioLab library (import psychopy.hardware.ioLab)
  • ADDED: trial loops in builder can now be aborted by setting someLoopName.finished=True

  • ADDED: improved timing. Support for blocking on VBL for all platforms (may still not work on intel integrated chips)

  • FIXED: minor bug with closing Coder windows generating spurious error messages

  • ADDED: ‘allowed’ parameter to gui.fileOpenDlg and fileSaveDlg to provide custom file filters

PsychoPy 1.62.00

Released: August 2010

  • ADDED: support for Excel 2007 files (.xlsx) for data output and trial types input:
    • psychopy.data now has importTrialList(fileName) to generate a trial list (suitable for TrialHandler) from .xlsx or .csv files
    • Builder loops now accept either an xlsx or csv file for the TrialList
    • TrialHandler and StairHandler now have saveToExcel(filename, sheetName=’rawData’, appendFile=True). This can be used to generate almost identical files to the previous delimited files, but also allows multiple (named) worksheets in a single file. So you could have one file for a participant and then one sheet for each session or run (see the sketch after this list).
  • CHANGED: for builder experiments the trial list for a loop is now imported from the file on every run, rather than just when the file is initially chosen

  • CHANGED: data for TrialHandler are now stored as masked arrays where possible. This means that trials with no response can be more easily ignored by analysis

  • FIXED: bug opening loop properties (bug introduced by new advanced params option)

  • FIXED: bug in Builder code generation for keyboard (only when using forceEnd=True but store=’nothing’)

  • CHANGED: RunTimeInfo is now in psychopy.info not psychopy.data

  • CHANGED: PatchStim for image files now defaults to showing the image at native size in pixels (making SimpleImageStim less useful?)

  • CHANGED: access to the parameters of TrialList in the Builder now (by default) uses a more cluttered namespace for variables. e.g. if your TrialList file has heading rgb, then your components can access that with ‘$rgb’ rather than ‘$thisTrial.rgb’. This behaviour can be turned off with the new Builder preference ‘allowClutteredNamespace’.

  • FIXED: if Builder needs to output info but user had closed the output window, it is now reopened

  • FIXED: Builder remembers its window location

  • CHANGED: Builder demos now need to be fetched by the user - menu item opens a browser (this is slightly more effort, but means the demos aren’t stored within the app which is good)

  • CHANGED: loops/routines now get inserted to Flow by clicking the mouse where you want them :-)

  • ADDED: you can now have multiple Builder windows open with different experiments

  • ADDED: you can now copy and paste Routines from one Builder window to another (or itself) - useful for reusing ‘template’ routines

  • FIXED: color of window was incorrectly scaled for ‘named’ and ‘rgb256’ color spaces

  • ADDED: quicktime movie output for OSX 10.6 (10.5 support was already working)

  • ADDED: Mac app can now receive dropped files on the coder and builder panels (but won’t check if these are sensible!!)

  • ADDED: debugMode preference for the app (for development purposes)

  • ADDED: working version of RatingStim
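
A hedged sketch of the .xlsx workflow described in this release, using the function and method names exactly as they are listed above; the conditions file name and the data added are purely illustrative:

    # Sketch only: build trials from an .xlsx conditions file and save to Excel.
    from psychopy import data

    trialList = data.importTrialList('conditions.xlsx')   # name as given in this entry
    trials = data.TrialHandler(trialList=trialList, nReps=2)
    for thisTrial in trials:
        trials.addData('rt', 0.5)                          # placeholder data
    # one named worksheet per call; appendFile lets several runs share one file
    trials.saveToExcel('participant01.xlsx', sheetName='run1', appendFile=True)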

PsychoPy 1.61

PsychoPy 1.61.03

Patch released July 2010

  • FIXED: harmless error messages caused by trying to get the file date/time when no file is open
  • CHANGED: movie file used in movie demo (the chimp had unknown copyright)
  • FIXED: problem with nVidia cards under win32 being slow to render RadialStim
  • FIXED bug in filters.makeGrating where gratType=’sqr’
  • FIXED bug in new color spaces for computers that don’t support shaders
  • ADDED option to Builder components to have ‘advanced’ parameters not shown by default (and put this to use for Patch Component)

PsychoPy 1.61.02

Patch released June 2010

  • ADDED: Code Component to Builder (to insert arbitrary python code into experiments)
  • ADDED: visual.RatingScale ‘stimulus’ (thanks to JG). See ratingScale demo in Coder view
  • FIXED: TrialHandler can now have dataTypes that contain underscores (thanks fuchs for the fix)
  • FIXED: loading of scripts by coder on windows assumed ASCII so broke with unicode characters. Now assumes unicode (as was the case on other platforms)
  • FIXED: minor bugs connecting to PR650

PsychoPy 1.61.01

Patch released May 2010

  • FIXED: Bug in coder spitting out lots of errors about no method BeginTextColor
  • FIXED: Buglet in rendering of pygame text without shaders
  • FIXED: broken link for >Help>Api (reference) menuitem

PsychoPy 1.61.00

Released May 2010

  • CHANGED: color handling substantially. Now supply color and colorSpace arguments and use setColor rather than setRGB etc. Previous methods still work but give deprecation warning.
  • ADDED: Colors can now also be specified by name (one of the X11 or web colors, e.g. ‘DarkSalmon’) or hex color spec (e.g. ‘#E9967A’); see the sketch after this list
  • REMOVED: TextStimGLUT (assuming nobody uses GLUT backend anymore)
  • ADDED: ‘saw’ and ‘tri’ options to specify grating textures, to give sawtooth and triangle waves
  • FIXED: visual.DotStim now updates coherence based on setFieldCoherence calls
  • FIXED: bug in autoupdater for installs with setuptools-style directory structure
  • FIXED: bug in SimpleImageStim - when graphics card doesn’t support shaders colors were incorrectly scaled
  • CHANGED: console (stdout) default logging level to WARNING. More messages will appear here than before
  • ADDED: additional log level called DATA for saving data info from experiments to logfiles
  • ADDED: mouse component to Builder
  • ADDED: checking of coder script for changes made by an external application (thanks to Jeremy Gray)
  • ADDED: data.RuntimeInfo() for providing various info about the system at launch of script (thanks to Jeremy Gray)
  • FIXED: problem with rush() causing trouble between XP/vista (thanks to Jeremy Gray)
  • AMERICANIZATION: now consistently using ‘color’ not ‘colour’ throughout the project! ;-)
  • FIXED: problem with non-numeric characters being inserted into data structures
  • CHANGED: stimuli using textures now automatically clean these up, so no need for users to call .clearTextures()
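
A short sketch of the new color interface described in this release; the stimulus class and the values are illustrative only:

    # Sketch of the color/colorSpace interface (names/values illustrative).
    from psychopy import visual

    win = visual.Window()
    stim = visual.PatchStim(win, tex='sin', color='DarkSalmon')  # named web/X11 color
    stim.setColor('#E9967A')                                     # hex color spec
    stim.setColor([1.0, -1.0, -1.0], colorSpace='rgb')           # explicit color space
    stim.draw()
    win.flip()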

PsychoPy 1.60

PsychoPy 1.60.04

Released March 2010

  • FIXED build error (OS X 10.6 only)

PsychoPy 1.60.03

Released Feb 2010

  • FIXED buglet in gui.py converting ‘false’ to True in dialogs (thanks Michael MacAskill)
  • FIXED bug in winXP version introduced by fixes to the winVista version! Now both should be fine!!

PsychoPy 1.60.02

Released Feb 2010

  • CHANGED ext.rush() is no longer run by default on creation of a window. It seems to be causing more probs and providing little enhancement.
  • FIXED error messages from vista/7 trying to import pywintypes.dll

PsychoPy 1.60.01

Released Feb 2010

  • FIXED minor bug with the new psychophysicsStaircase demo (Builder)
  • FIXED problem with importing wx.lib.agw.hyperlink (for users with wx<2.8.10)
  • FIXED bug in the new win.clearBuffer() method
  • CHANGED builder component variables so that the user inputs are interpreted as literal text unless preceded by $, in which case they are treated as variables/python code
  • CHANGED builder handling of keyboard ‘allowedKeys’ parameter. Instead of [‘1’,‘2’,’q’] you can now simply use 12q to indicate those three keys. If you want a key like ‘right’ and ‘left’ you now have to use $[‘right’,’left’]
  • TWITTER follow on http://twitter.com/psychopy
  • FIXED? win32 version now compatible with Vista/7? Still compatible with XP?

PsychoPy 1.60.00

Released Feb 2010

  • simplified prefs:
    • no more site prefs (user prefs only)
    • changed key bindings for compileScript(F5), runScript(Ctrl+R), stopScript(Ctrl+.)
  • ADDED: full implementation of staircase to Builder loops and included a demo for it to Builder

  • CHANGED: builder components now have a ‘startTime’ and ‘duration’ rather than ‘times’

  • ADDED: QuickTime output option for movies (OSX only)

  • ADDED: script is saved by coder before running (can be turned off in prefs)

  • ADDED: coder checks (and prompts) for filesave before running script

  • ADDED: setHeight to TextStim objects, so that character height can be set after initialisation

  • ADDED: setLineRGB, setFillRGB to ShapeStim

  • ADDED: ability to auto-update from PsychoPy source installer (zip files)

  • ADDED: Monitor Center can be closed with Ctrl-W

  • ADDED: visual.Window now has a setRGB() method

  • ADDED: visual.Window now has a clearBuffer() method

  • ADDED: context-specific help buttons to Builder dialogs

  • ADDED: code to flip SimpleImageStim (new methods flipHoriz() and flipVert())

  • ADDED: Butterworth filters to psychopy.filters (thanks Yaroslav Halchenko)

  • ADDED: options to view whitespace, EOLs and indent guides in Coder

  • ADDED: auto-scaling of time axis in Routines panel

  • IMPROVED: Splash screen comes up faster to show the app is loading

  • FIXED: bug in RadialStim .set functions (default operation should be “” not None)

  • FIXED: on mac trying to save an unchanged document no longer inserts an ‘s’

  • FIXED: bug with SimpleImageStim not drawing to windows except #1

  • FIXED: one bug preventing PsychoPy from running on vista/win7 (are there more?)

  • CHANGED: psychopy.filters.makeMask() now returns a mask with values -1:1, not 0:1 (as expected by stimulus masks)

  • RESTRUCTURED: the serial package is no longer a part of core psychopy and is no longer required (except when hardware is actually being connected). This should now be installed as a dependency by users, but is still included with the Standalone packages.

  • RESTRUCTURED: preparing for further devices to be added, hardware is now a folder with files for each manufacturer. Now use e.g.:

    from psychopy.hardware.PR import PR650
    from psychopy.hardware.cedrus import RB730
    

PsychoPy 1.51.00

(released Nov 2009)

  • CHANGED: gamma handling to handle buggy graphics drivers on certain cards - see note below
  • CHANGED: coord systems for mouse events - both winTypes now provide mouse coords in the same units as the Window
  • FIXED: mouse in pyglet window does now get hidden with Window allowGUI=False
  • FIXED: (Builder) failed to open from Coder view menu (or cmd/ctrl L)
  • FIXED: failure to load user prefs file
  • ADDED: keybindings can be handled from prefs dialog (thanks to Jeremy Gray)
  • ADDED: NxNx3 (ie RGB) numpy arrays can now be used as textures
  • FIXED: MovieStim bug on win32 (was giving spurious avbin error if visual was imported before event)

NB. The changes to gamma handling should need no changes to your code, but could alter the gamma correction on some machines. For setups/studies that require good gamma correction it is recommended that you recalibrate when you install this version of PsychoPy.

PsychoPy 1.50

PsychoPy 1.50.04

(released Sep 09)

  • FIXED (Builder) bug with loading files (monitor fullScr incorrectly reloaded)
  • FIXED (Coder) bug with Paste in coder
  • FIXED (Builder) bug with drop-down boxes
  • FIXED (Builder) bug with removed routines remaining in Flow and InsertRoutineDlg
  • MOVED demos to demos/scripts and added demos/exps (for forthcoming Builder demos)
  • CHANGED (Builder) creating a new file in Builder (by any means) automatically adds a ‘trial’ Routine
  • FIXED (Builder) various bugs with the Patch component initialisation (params being ignored)
  • FIXED (Builder) better default parameters for text component

PsychoPy 1.50.02

(released Sep 09)

  • FIXED bug loading .psydat files (component variables were being saved but not reloaded)
  • removed debugging messages that were appearing in Coder output panel
  • FIXED long-standing problem (OS X only) with “save unchanged” dialogs that won’t go away
  • FIXED bug with ‘cancel’ not always cancelling on “save unchanged” dialogs
  • ADDED warning dialog if user adds component without having any routines
  • ADDED builder now remembers its location, size and panel sizes (which can be moved around)

PsychoPy 1.50.01

(released Sep 09)

  • FIXED problem creating prefs file on first use
  • FIXED problem with removing (identical) routines in Flow panel
  • FIXED problem with avbin import (OS X standalone version)

PsychoPy 1.50.00

(released Sep 09)

  • ADDED A preview of the new application structure and GUI
  • ADDED performance enhancements (OS X now blocks on vblank, all platforms rush() if user has permissions)
  • ADDED config files. These are already used by the app, but not the library.
  • ADDED data.getDateStr() for convenience
  • FIXED bug on certain intel gfx cards (shaders now require float extension as well as opengl2.0)
  • FIXED bug scaling pygame text (which caused pygame TextStims not to appear)
  • BACKWARDS NONCOMPAT: monitors is moved to be a subpackage of psychopy
  • BACKWARDS NONCOMPAT: added ‘all_mean’ (and similar) data types to TrialHandler.saveAsText and these are now default
  • ADDED TrialType object to data (extends traditional dicts so that trial.SF can be used as well as trial[‘SF’]); see the sketch after this list
  • converted docs/website to sphinx rather than wiki (contained in svn)
  • FIXED bug with MovieStim not displaying correctly after SimpleImageStim
  • FIXED incorrect wx sizing of app(IDE) under OS X on opening
  • CHANGED license to GPL (more restrictive, preventing proprietary use)
  • CHANGED gui dialogs are centered on screen rather than wx default position
  • new dependency on lxml (for saving/loading builder files)
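
A hypothetical sketch of the dict-plus-attribute access that TrialType is described as providing above; ‘SF’ is just an example column name, and whether plain dicts are wrapped automatically may depend on the version:

    # Sketch only: attribute access alongside dict access on a trial.
    from psychopy import data

    trials = data.TrialHandler(trialList=[{'SF': 4.0}, {'SF': 8.0}], nReps=1)
    for thisTrial in trials:
        # both forms are intended to refer to the same value
        print(thisTrial['SF'], thisTrial.SF)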

PsychoPy 1.00

PsychoPy 1.00.04

(released Jul 09)

  • DotStim can have fieldShape of ‘sqr’, ‘square’ or ‘circle’ (the first two are equiv)
  • CHANGED interpreters in all .py scripts to be the same (#!/usr/bin/env python). Use the PATH env variable to choose a non-default python version for your Python scripts
  • CHANGED pyglet textures to use numpy->ctypes rather than numpy->string
  • FIXED systemInfo assignment on Linux systems

PsychoPy 1.00.03

(released Jul 09)

  • FIXED initialisation bug with SimpleImageStimulus
  • FIXED “useShaders” buglet for TextStim
  • CHANGED IDE on win32 to run scripts as processes rather than imports (gives better error messages)
  • ADDED mipmap support for textures (better antialiasing for down-scaling)
  • CHANGED win32 standalone to include the whole raw python rather than using py2exe

PsychoPy 1.00.02

(released Jun 09)

  • ADDED SimpleImageStimulus for simple blitting of raw, unscaled images
  • ADDED collection of anonymous usage stats (e.g.: OSX_10.5.6_i386 1.00.02 2009-04-27_17:26 )
  • RENAMED DotStim.setDirection to setDir for consistency (the attribute is dir not direction)
  • FIXED bug with DotStim updating for ‘walk’ and ‘position’ noise dots (thanks Alex Holcombe)
  • FIXED bug with DotStim when fieldSize was initialised with a list rather than an array
  • FIXED buglet using event.getKeys in pygame (nothing fetched if pyglet installed)
  • CHANGED image loading code to check whether the image is a file, rather than using try..except
  • FIXED buglet raising trivial error messages on closing final window in IDE
  • FIXED problem pasting into find dlg in IDE

PsychoPy 1.00.01

(released Feb 09)

  • FIXED buglet in windows standalone installer

PsychoPy 1.00.00

  • ADDED ShapeStim, for drawing geometric stimuli (see demos/shapes.py and new clockface.py)
  • ADDED support for the tristate ctrl bit on parallel ports (thanks Gary Strangman for the patch)
  • ADDED standalone installer support for windows (XP, vista?)
  • FIXED minor bug in Window.flip() with frame recording on (average -> numpy.average)
  • FIXED minor bug in sound, now forcing pygame.mixer to use numpy (thanks Konstantin for the patch)
  • FIXED visual stimulus positions forced to be floats on init (thanks C Luhmann)

PsychoPy 0.97:

PsychoPy 0.97.01:

  • FIXED bug with IDE not closing properly (when current file was not right-most)
  • ADDED parallel.readPin(pinN) so that parallel port can be used for input as well as output
  • FIXED bug in parallel.setPortAddress(addr)
  • ADDED check for floats as arguments to ElementArrayStim set methods
  • CHANGED: frame time recording to be off by default (for plotting, for Window.fps() and for warnings). To turn it on use Window.setRecordFrameIntervals(True), preferably after first few frames have elapsed
  • IMPROVED detection of (truly) dropped frames using log.console.setLevel(log.WARNING); see the sketch after this list
  • FIXED bug that was preventing bits++ from detecting LUT on the Mac (ensure screen gamma is 1.0 first)
  • FIXED buglet with .setRGB on stimuli - that method should require an operation argument (def=None)
  • ADDED fieldDepth and depths (for elements, relative to fieldDepth) as separate arguments to the ElementArrayStim
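
A small sketch of the frame-interval recording switch and logging level mentioned above; the psychopy.log module name follows this era of the changelog and is an assumption for newer versions:

    # Sketch only: turn on frame-time recording and surface dropped-frame warnings.
    from psychopy import visual, log

    log.console.setLevel(log.WARNING)   # warnings will flag (truly) dropped frames
    win = visual.Window()
    win.setRecordFrameIntervals(True)   # off by default; turn on after the first few frames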

PsychoPy 0.97.00:

  • ADDED options to DotStim motions. Two args have been added:
    • signalDots can be ‘different’ from or ‘same’ as the noise dots (from frame to frame)
    • noiseDots determines the update rule for the distractor dots (random ‘position’, ‘walk’, ‘direction’)
    • dotLife now works (was previously just a placeholder). Default is -1 (so should be same as before). See Scase, Braddick & Raymond (1996) for further info on the importance of these
  • ADDED options to event.getKeys (see the sketch after this list):
    • keyList to limit which keys are checked for (thanks Gary Strangman)
    • timeStamped=False/True/Clock (thanks Dave Britton)
  • CHANGED pyglet key checking now returns ‘1’ as the key irrespective of numpad or otherwise (used to return ‘1’ or ‘NUM_1’)
  • FIXED bug in event.py for machines where pyglet is failing to import
  • REMOVED AlphaStim (after a long period of ‘deprecated’)
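
A sketch of the new event.getKeys options listed above; keyList restricts which keys are reported and passing a Clock as timeStamped returns (key, time) pairs:

    # Sketch only: restricted, time-stamped key collection.
    from psychopy import core, event

    rtClock = core.Clock()
    keys = event.getKeys(keyList=['1', '2'], timeStamped=rtClock)
    for key, t in keys:        # each entry is (key, time) when a Clock is given
        print(key, t)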

PsychoPy 0.96:

PsychoPy 0.96.02:

  • FIXED bug introduced with clipping of text in 0.96.01 using textstimuli with shaders under pygame
  • FIXED bug with rendering png alpha layer using pyglet shaders

PsychoPy 0.96.01:

  • FIXED problem with write errors running demos from Mac IDE
  • ADDED frameWidth to textStim for multiline
  • ADDED setRecordFrameIntervals, saveFrameTimes() to Window and misc.plotFrameIntervals()
  • FIXED had accidentally made pygame a full dependency in visual.py
  • FIXED MovieStim was being affected by texture color of other stimuli
  • FIXED window now explicitly checks for GL_ARB_texture_float before using shaders

PsychoPy 0.96.00:

  • FIXED pygame back-end so that can be used as a valid alternative to pyglet (requires pygame1.8+ and PyOpenGL3.0+, both included in mac app)
  • CHANGED default sound handler to be pygame again. Although pyglet looked promising for this it has turned out to be buggy: timing of sounds can be very irregular and sometimes they don’t even play. Although pygame has longer overall latencies (20-30ms) its behaviour is at least robust. This will be revisited one day when I have time to write driver-specific code for sounds
  • FIXED image importing - scaling from square image wasn’t working and CMYK images weren’t imported properly. Both are now fine.

PsychoPy 0.95:

PsychoPy 0.95.11:

  • ADDED sound.Sound.getDuration() method
  • FIXED spurious (unimportant but ugly) error messages raised by certain threads on core.quit()

PsychoPy 0.95.9:

  • FIXED further bug in sound.Sound on win32 (caused by thread being polled too frequently)
  • FIXED new bug in notebook view (introduced in 0.95.8)

PsychoPy 0.95.8:

  • FIXED bug in sound.Sound not repeating when play() is called repeatedly
  • IDE uses improved notebook view for code pages
  • IDE line number column is larger
  • IDE SaveAs no longer raises (inconsequential) error
  • IDE Cmd-S or Ctrl-S now clears autocomplete

PsychoPy 0.95.7:

  • ADDED misc.cart2pol()
  • ADDED highly optimised ElementArrayStim, suitable for drawing large numbers of elements. Requires fast OpenGL 2.0 gfx card - at least an nVidia 8000 series or ATI HD 2600 are recommended.
  • FIXED bug in calibTools with MonitorFolder (should have been monitorFolder)
  • FIXED bug in Sound.stop() for pyglet contexts
  • FIXED bug in running scripts with spaces in the filename/path (Mac OS X)

PsychoPy 0.95.6:

  • DISABLED the setting of gamma if this is [1,1,1] (allows the user to set it from a control panel and not have this adjusted)
  • FIXED gamma setting on linux (thanks to Luca Citi for testing)
  • FIXED bug in TextStim.setRGB (wasn’t setting properly after text had been created)
  • FIXED bug searching for shaders on ATI graphics cards
  • FIXED - now no need to download avbin for the mac IDE installation

PsychoPy 0.95.5:

  • FIXED bug in event.clearEvents() implementation in pyglet (wasn’t clearing)
  • FIXED - psychopy no longer disables ipython shortcut keys
  • FIXED bug in sound.Sound initialisation without pygame installed
  • ADDED core.rush() for increasing thread priority on win32
  • ADDED Window._haveShaders, XXXStim._useShaders and XXXStim.setUseShaders
  • FIXED crashes on win32, running a pyglet context after a DlgFromDict
  • ADDED gamma correction for pyglet contexts (not tested yet on linux)

PsychoPy 0.95.4:

  • CHANGED PsychoPy options (IDE and monitors) are now stored in the following locations, rather than with the app (monitor calibration files will be moved here if possible):
    • ~/.PsychoPy/IDE (OS X, linux)
    • <Docs and Settings>/<user>/Application Data/PsychoPy
  • FIXED bug in text rendering (ATI/win32/pyglet combo only)

  • FIXED minor bug in handling of images with alpha channel

PsychoPy 0.95.3:

  • ADDED a .clearTextures() method to PatchStim and RadialStim, which should be called before de-referencing a stimulus
  • CHANGED input range for numpy array textures to -1:1
  • ADDED sysInfo.py to demos

PsychoPy 0.95.2:

  • FIXED quitting PsychoPyIDE now correctly cancels when saving files

PsychoPy 0.95.1:

  • FIXED problem with saving files from the IDE on Mac
  • FIXED Cmd-C now copies from the output window of IDE
  • even nicer IDE icons (thanks to the Crystal project at everaldo.com)
  • FIXED bug in the shaders code under pyglet (was working fine in pygame already)
  • (refactored code to use a template visual stimulus)

PsychoPy 0.95.0:

  • FIXED linux bug preventing repeated dialogs (thanks Luca Citi)
  • REWRITTEN stimuli to use _BaseClass, defining ._set() method
  • MAJOR IMPROVEMENTS to IDE:
    • Intel mac version available as app bundle, including python
    • FIXED double help menu
    • cleaned code for fetching icons
    • fixed code for updating SourceAssistant (now runs from .OnIdle())

Older

PsychoPy 0.94.0:

  • pyglet:
    • can use multiple windows and multiple screens (see screensAndWindows demo)
    • sounds are buffered faster and more precisely (16ms with <0.1ms variability on my system)
    • creating sounds in pyglet starts a separate thread. If you use sounds in your script you must call core.quit() when you’re done to exit the system (or this background thread will continue)
    • pyglet window.setGamma and setGammaRamp working on win and mac (NOT LINUX)
    • pyglet event.Mouse complete (and supports wheel as well as buttons)
    • pyglet is now the default context. pygame will be used if explicitly called or if pyglet (v1.1+) isn’t found
    • pyglet can now get/save movie frames (like pygame)
    • TextStims are much cleaner (and a bit bigger?). Can use multiple lines too. New method for specifying font
  • added simpler parallel.py (wraps _parallel which will remain for now)
  • removed the C code extensions in favour of ctypes (so compiler no longer necessary)
  • converted “is” to “==” where appropriate (thanks Luca)
  • Window.getMovieFrame now takes a buffer argument (‘front’ or ‘back’)
  • monitor calibration files now stored in HOME/.psychopy/monitors rather than site-packages
  • Window.flip() added and supports the option not to clear the previous buffer (for incremental drawing; see the sketch after this list). Window.update() is still available for now but can be replaced with flip() commands
  • updated demos
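
A tiny sketch of the incremental-drawing option on the new flip() described above; the stimulus and loop are illustrative:

    # Sketch only: accumulate drawing across frames by not clearing the buffer.
    from psychopy import visual

    win = visual.Window()
    stim = visual.TextStim(win, text='building up...')
    for frameN in range(10):
        stim.draw()
        win.flip(clearBuffer=False)  # keep what was drawn on previous frames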

PsychoPy 0.93.6:

  • bug fixes for OS X 10.5 and ctypes OpenGL
  • new improved OS X installer for dependencies
  • moved to egg for OS X distribution

PsychoPy 0.93.5:

  • added rich text ctrl to IDE output, including links to lines of errors

  • IDE now only opens one copy of a given text file

  • improved (chances of) sync-to-vertical blank on windows without adjusting driver settings (on windows it’s still better to set driver to force sync to be safe!)

  • added center and radius arguments to filters.makeMask and filters.makeRadialMatrix

  • implemented pyglet backend for;
    • better screen handling (can specify which screen a window should appear in)
    • fewer dependencies (takes care of pygame and opengl)
    • faster sound production
    • TextStims can be multi-line
    • NO GAMMA-SETTING as yet. Don’t use this backend if you need a gamma-corrected window and aren’t using Bits++.
  • changed the behaviour of Window winTypes

    If you leave winType as None PsychoPy tries to use Pygame, Pyglet, GLUT in that order (when Pyglet can handle gamma funcs it will become default). Can be overridden by specifying winType.

  • turned off depth testing for drawing of text (will simply be overlaid in the order called)

  • changes to TextStim: pyglet fonts are loaded by name only, not filename. PsychoPy TextStim now has an additional argument called ‘fontFiles=[]’ to allow the adding of custom ttf fonts, but the font name should be used as the font=’’ argument.

  • updated some of the Reference docs

PsychoPy 0.93.3:

  • fixed problem with ‘dynamic loading of multitextureARB’ (only found on certain graphics cards)

PsychoPy 0.93.2:

  • improved detection of non-OpenGL2.0 drivers

PsychoPy 0.93.1:

  • now automatically uses shaders only if available (older machines can use this version but will not benefit from the speed up)
  • slight speed improvement for TextStim rendering (on all machines)

PsychoPy 0.93.0:

  • new requirement of PyOpenGL3.0+ (and a graphics card with OpenGL2.0 drivers?)
  • much faster implementation of setRGB, setContrast and setOpacity (using fragment shaders)
  • images (and other textures) need not be square. They will be automatically resampled if they aren’t. Square power-of-two image textures are still recommended
  • Fixed problem in calibTools.DACrange caused by change in numpy rounding behaviour. (symptom was strange choice of lum values for calibrations)
  • numpy arrays as textures currently need to be NxM intensity arrays
  • multitexturing now handled by OpenGL2.0 rather than ARB
  • added support for Cedrus response pad
  • if any component of rgb*contrast>1 then the stimulus will be drawn as low contrast and b/y (rgb=[0.2,0.2,-0.2]) in an attempt to alert the user that this is out of range

PsychoPy 0.92.5:

  • Fixed issue with stairhandler (it was terminating based only on the nTrials). It does now terminate when both the nTrials and the nReversals [or length(stepSizes) if this is greater] are exceeded.
  • Minor enhancements to IDE (added explicit handlers to menus for Ctrl-Z, Ctrl-Y, Ctrl-D)

PsychoPy 0.92.4:

  • fixed some source packaging problems for linux (removed trademark symbols from serialposix.py and fixed directory capitalisation of IDE/Resources in setup.py). Thanks to Jason Locklin and Samuele Carcagno for picking them up.
  • numerous minor improvements to the IDE
  • reduced the buffer size of sound stream to reduce latency of sound play
  • fixed error installing start menu links (win32)

PsychoPy 0.92.3:

  • new source .zip package (switched away from the use of setuptools - it didn’t include files properly in a source dist)
  • Fixed problem on very fast computers that meant error messages weren’t always displayed in the IDE

PsychoPy 0.92.2:

  • have been trying (and failing) to make scripts run faster from the IDE under Mac OS X. Have tried using threads and debug modules (which would mean you didn’t need to import all the libs every time). All these work fine under win32 but not under OS X every time :-( If anyone has a new idea for how to run a pygame window in the same process as the IDE thread I’d love to hear it
  • removed the messages from the new TextStim stimuli
  • fixed bug in IDE that caused it to crash before starting if pythonw.exe was run rather than python.exe on first run(!)
  • improvements to the source assistant window (better help and now fetches function arguments)

Known Problems: the IDE isn’t collecting all errors that are returned - a problem with the process redirection mechanism? FIXED in 0.92.3

PsychoPy 0.92.1

  • fixed minor bug in IDE - wouldn’t open if it had been closed with no open docs.
  • fixed problem with pushing/popping matrix that caused the stimuli to disappear (only if a TextStim was rendered repeatedly)

PsychoPy 0.92.0:

  • ‘sequential’ ordering now implemented for data.TrialHandler (thx Ben Webb)
  • moved to pygame fonts (with unicode support and any TT font on the system). The switch will break any code that was using TextStim with lineWidth or letterWidth as args. Users wanting to continue using the previous TextStim can call textStimGLUT instead (although I think the new pygame fonts are superior in every way).
  • improved IDE handling of previous size (to cope with being closed in the maximised or minimised state, which previously caused the window not to return)

PsychoPy 0.91.5:

  • fixed minor bug in using numpy.array as a mask (was only working if array was 128x128)
  • faster startup for IDE (added threading class for importing modules)
  • fixed very minor bug in IDE when searching for attributes that don’t exist
  • fixed minor bug where scripts with syntax errors didn’t run but didn’t complain either
  • IDE FileOpen now tries the folder that the current file is in first
  • IDE removed threading class for running scripts

PsychoPy 0.91.4

  • fixed the problem of stimulus order/depth. Now the default depth is set (more intuitively) by the order of drawing not creating.
  • IDE added recent files to file menu
  • IDE minor bug fixes
  • IDE rewrite of code inspection using wx.py.introspect

PsychoPy 0.91.3

  • added find dialog to IDE
  • added ability of data.FunctionFromStaircase to create unique bins rather than averaging several x values. Give bins=’unique’ (rather than bins=someInteger). Also fixed very minor issue where this func would only take a list of lists, rather than a single list.

PsychoPy 0.91.2

  • fixed IDE problem running filenames containing spaces (only necessary on win32)

PsychoPy 0.91.1

  • added reasonable SourceAssistant to IDE
  • added a stop button to abort scripts in IDE
  • IDE scripts now run as sub process rather than within the main process: slower but safer
  • added an autoflushing stdout to psychopy.__init__. Where lots of text is written to stdout this may be a problem, but turning it off means that stdout doesn’t get properly picked up by the IDE :-(

PsychoPy 0.91.0

  • PsychoPy now has its own IDE!! With syntax-highlighting, code-folding and auto-complete!! :-)
  • gui.py had to be refactored a little but (I think) should not be noticed by the end user (gui.Dlg is now a subclass of wx.Dialog rather than a modified instance)
  • gui.Dlg and DlgFromDict now end up with an attribute .OK that is either True or False
  • fixed bug in data.StairHandler that could result in too many trials being run (since v0.89)

PsychoPy 0.90.4

  • resolved deprecation warning with wxPython (now using “import wx”)

PsychoPy 0.90.3

  • used the new numpy.mgrid commands throughout filters and visual modules
  • sorted out the rounding probs on RadialStim
  • fixed import bug in calibtools.py

PsychoPy 0.90.2

  • fixed new bug in the minVal/maxVal handling of StairHandler (where these have not been specified)
  • changed the default console log level to be ERROR, due to too much log output!

PsychoPy 0.90.1

  • fixed new bug in Sound object
  • changed the default log file to go to the script directory rather than site-packages/psychopy

PsychoPy 0.90

  • sounds now in stereo and a new function to allow you to choose the settings for the sound system.

  • LMS colors (cone-isolating stimuli) are now tested and accurate (when calibrated)

  • added logging module (errors, warnings, info). And removed other messages:
    • ‘verbose’ flags have become log.info messages
    • ‘warn’ commands have become log.warning messages
  • added minVal and maxVal arguments to data.StairHandler so that range can be bounded

  • ‘import psychopy’ no longer imports anything other than core

Psychopy 0.89.1

  • fixed bug in new numpy’s handling of bits++ header

Psychopy 0.89

  • optimised DotStim to use vertex arrays (can now draw several thousand dots)
  • optimised RadialStim to use vertex arrays (can increase radial resolution without much loss)

Psychopy 0.88

  • fixed problem with MonitorCenter on OSX (buttons not working on recent version of wxPython)

Psychopy 0.87

  • added sqrXsqr to RadialStim and made it default texture
  • fixed a minor bug in RadialStim rendering (stimuli failed to appear under certain stimulus orderings)
  • changed RadialStim size parameter to be diameter rather than radius (to be like AlphaStim)
  • namechange: introduced PatchStim (currently identical to AlphaStim which may one day become deprecated)

Psychopy 0.86

  • distributed as an .egg

Psychopy 0.85

  • upgraded for numpy1.0b and scipy0.50. Hopefully those packages are now stable enough that they won’t need further PsychoPy compatibility changes

Psychopy 0.84

  • NEW (alpha) support for radial patterns rather than linear ones
  • changed Clock behaviour to use time.clock() on win32 rather than time.time()
  • fixed a bug in the shuffle seeding behaviour
  • added a noise pattern to the background in monitor calibration

Psychopy 0.83

  • NEW post-install script for Win32 installs shortcuts to your >>Start>Programs menu

  • NEW parallel port code (temporary form) using DLportIO.dll can be found under _parallel

  • NEW hardware module with support for fORP response box (for MRI) using serial port

  • added iterator functionality to data.TrialHandler and data.StairHandler, so you can now use:

    for thisTrial in allTrials:
        pass  # run one trial here

    A consequence is that .nextTrial() will be deprecated in favour of .next(). Also, when the end of the trials is reached a StopIteration is raised.

  • added the ability to seed the shuffle mechanism (and trial handler) so you can repeat experiments with the same trial sequence

Psychopy 0.82

  • rewritten code for bits++ LUT drawing, raised by changes in pyOpenGL(2.0.1.09) call to drawpixels
  • minor change to exit behaviour. pyGame.quit() is now called and then sys.exit(0) rather than sys.exit(1)
  • bug fixes in type handling (from Numeric to numpy)

Psychopy 0.81

  • changes to gui caused by new threading behaviour of wxPython and PyGame (DlgFromDict must now be a class not a function).

Psychopy 0.80

  • switching numeric code to new python24 and new scipy/numpy. MUCH nicer
  • new (reduced) requirements:
    • numpy 0.9 or newer (the replacement for Numeric/numarray)
    • scipy 0.4.4 or newer
    • pyOpenGL
    • pygame
    • PIL
    • matplotlib (for data plotting)

PsychoPy 0.72

  • tested (and fixed) compatibility with wxPython 2.6. Will now be using this as my primary handler for GUIs
  • ADDED ability to quit during run of getLumSeries

PsychoPy 0.71

  • FIXED filename bug in makeMovies.makeAnimatedGIF
  • slight change to monitors so that it uses testMonitor.calib as a default rather than default.calib (testMonitor.calib is packaged with the installation)

PsychoPy 0.70

  • FIXED bug in setSize. Wasn’t updating correctly
  • ADDED ability to append to a data file rather than create new
  • bits.lib (from CRS) is now distributed directly with psychopy rather than needing a separate install
  • ADDED db/log/linear step methods to StairHandler
  • ADDED logistic equation to data.FitFunction

PsychoPy 0.69

  • ADDED a testMonitor to the monitors package so that demos can use it for pseudo-calibrated stimuli.
  • REDUCED the attempt to use _bits.pyd. Was only necessary for machines that had bits++ monitor center
  • ADDED basic staircase method
  • CHANGED dlgFromDict to return None on cancel rather than 0
  • CHANGED the description of sin textures so that the centre of the patch had the color of dkl or rgb rather than the edge. (Effectively all sin textures are now shifted in phase by pi radians.)
  • Demos removed from the main package - now ONLY distributed as a separate library

PsychoPy 0.68

  • FIXED toFile and fromFile so they work!?
  • Demos being distributed as a separate .zip file (may be removed from the main package someday)

PsychoPy 0.67

  • ADDED toFile, fromFile, pol2cart functions to psychopy.misc
  • CHANGED waitKeys to return a list of keys (usually of length one) so that it’s compatible with getKeys

PsychoPy 0.66

  • serial is now a subpackage of psychopy and so doesn’t need additional installation
  • REMOVED the code to try and query the graphics card about the scr dimensions. From now on, if you wish to use real world units, you MUST specify scrWidthPIX and scrWidthCM when you make your visual.Window
  • ADDED flag to data output to output matrixOnly (useful for matlab imports)
  • REVERTED the default numeric handler to be Numeric rather than numarray (because it looks like numarray hasn’t taken off as much as thought)
  • FIXED minor bug in text formatting for TrialHandler.saveAsText()
  • CHANGED visual.Window so that the monitor argument prefers to receive a Monitor object (rather than just a dictionary) or just the name of one. MonitorCenter makes it so easy to create these now that they should be the default.
  • CHANGED Photometer initialisation behaviour - used to raise an error on a fail but now sets an internal attribute .OK to False rather than True

PsychoPy 0.65

  • MonitorCenter now complete. Plots and checks gamma correction.
  • can write movies out to animated gifs(any platform) or mpg/avi (both windows only)

PsychoPy 0.64

  • CHANGED monitor key dkl_rgb_matrix to dkl_rgb (also for lms)
  • ADDED code for PR650 to get the monitor color calibration and calculate the color conversion matrices automatically. Will be implemented via the MonitorCenter application.
  • ADDED pyserial2.0 as a subpackage of psychopy so that it needn’t be separately installed
  • Much improved MonitorCenter with DKL and LMS calibration buttons and matrix output
  • Double-click installer for Mac now available

PsychoPy 0.63

  • ADDED ability to capture frames from the window as images (tif, jpg...) or as animated GIF files :) see demo
  • ADDED ability for elements in DotStim to be any arbitrary stimulus with a methods for .setPos(), .draw()

PsychoPy 0.62

  • FIXED the circular mask for DotStim
  • FIXED bug in the new text alignment method (was being aligned but not positioned?!)

PsychoPy 0.61

  • FIXED minor bug in MonitorCenter (OS X only)

PsychoPy 0.60

  • ADDED a GUI application for looking after monitors and calibrations. SEE MonitorCenter.py in the new package monitors
  • MOVED “psychopy.calib” subpackage to a whole separate package “monitors”. Calibration files will now be stored alongside the calibration code. This makes it easier to develop the new calibration GUI application that I’m working on. Also means that if you delete the psychopy folder for a new installation you won’t lose your calibration files.
  • ADDED optional maxWait argument to event.waitKeys()
  • CHANGED TextStim to take the font as a name rather than font number
  • ADDED alignment to text stimuli (alignVert, alignHoriz)
  • CHANGED waitKeys to implicitly clear keys from the event queue so that it only finds the first keypress after its called. As result it now returns a single character rather than list of them
  • CHANGED visual.Window so that it no longer overrides monitor settings if arguments are specified. Easy now to create a monitor in the monitors GUI and use that instead
  • ADDED the circular mask for DotStimulus

Resources (e.g. for teaching)

There are a number of further resources to help learn/teach about PsychoPy.

If you also have PsychoPy materials/course then please let us know so that we can link to them from here too!

P4N 2015: Python for Neuroscience (and Psychology)

There will be a 3-day workshop in April 2015 at Nottingham University. It won’t be only about PsychoPy, but about Python for science more generally, focusing on coding rather than using the Builder interface. We hope this year to run intermediate and novice sessions in parallel (rather than novice only).

Youtube tutorials

Materials for Builder

Materials for Coder


Previous events

Please remember to cite PsychoPy