Psychophysics as a discipline aims to understand the relationship between a physical stimulus and the behavioural response of an organism. The function relating stimulus level, or intensity, to the behavioural response is called the psychometric function; the response is usually specified as the probability of choosing one of several possible alternatives (e.g. correct/incorrect, yes/no, left/right). To characterise this relationship, the psychophysicist collects data in an empirical experiment with a behaving subject, counting how often the subject detects or discriminates a stimulus at different intensities. With the method of constant stimuli this yields a binomial dataset of K blocks, each recording the number of trials and the number of successes at one of a range of stimulus levels. Typically such datasets are summarised by fitting a sigmoidal nonlinearity, for example a logistic function, thereby describing the data with a parametric model. This model can then be used to derive summary statistics about the dataset, most importantly the threshold for the modality under investigation. The threshold is often defined as the stimulus intensity at which the subject is just able to detect or discriminate the stimulus. Examples include the minimum number of photons required to elicit a visual experience, and the minimum sound level of a pure tone that an average, normal-hearing ear can detect with no other sound present.
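To make this concrete, the following sketch (not psignifit code; all names and parameter values are illustrative) defines a logistic psychometric function with a guess rate and a lapse rate, and simulates a method-of-constant-stimuli dataset of K blocks of binomial counts:

```python
import numpy as np

def logistic_pf(x, alpha, beta, gamma=0.5, lam=0.02):
    """Logistic psychometric function.

    Rises from the guess rate gamma (chance performance, 0.5 in a
    two-alternative task) to 1 - lam as intensity x increases;
    alpha sets the position (threshold) and beta the slope.
    """
    core = 1.0 / (1.0 + np.exp(-(x - alpha) / beta))
    return gamma + (1.0 - gamma - lam) * core

# Method of constant stimuli: K = 6 blocks at fixed intensities,
# 50 trials per block, successes drawn from a binomial distribution.
rng = np.random.default_rng(0)
levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # stimulus intensities
n_trials = np.full(levels.size, 50)                 # trials per block
p_true = logistic_pf(levels, alpha=3.5, beta=0.6)   # true success probability
k_correct = rng.binomial(n_trials, p_true)          # successes per block
```

At x = alpha the function is exactly halfway between the guess rate and the lapse-limited ceiling, which is one common way to define the threshold.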
Methodologically, Psignifit 3 fits psychometric functions using a constrained maximum-likelihood approach to estimate thresholds. It also offers a goodness-of-fit assessment and two methods for obtaining variability estimates for the fitted parameters. The first, the bootstrap, has been the state of the art for psychometric functions for the last ten years; the second is a Bayesian inference approach based on Markov chain Monte Carlo (MCMC) sampling. Finally, there are additional methods to identify non-stationarities in the data, as well as possible outliers and influential observations.
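The general idea behind a constrained maximum-likelihood fit can be sketched as follows. This is not psignifit's actual implementation (its core is in C++); it is a minimal illustration using SciPy, with made-up data and a fixed guess rate of 0.5 as in a two-alternative task. The box constraints on the slope and lapse-rate parameters stand in for the constraints psignifit imposes to avoid degenerate fits:

```python
import numpy as np
from scipy.optimize import minimize

def logistic_pf(x, alpha, beta, gamma=0.5, lam=0.02):
    """Logistic psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-(x - alpha) / beta))

def neg_log_likelihood(params, levels, n, k):
    """Binomial negative log-likelihood of the data under the model."""
    alpha, beta, lam = params
    p = logistic_pf(levels, alpha, beta, gamma=0.5, lam=lam)
    p = np.clip(p, 1e-9, 1.0 - 1e-9)  # numerical safety at p = 0 or 1
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

# Illustrative dataset: 6 blocks of 50 trials each.
levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
n = np.full(6, 50)
k = np.array([27, 30, 38, 45, 48, 50])  # made-up success counts

# Constrained ML: slope must be positive, lapse rate restricted to [0, 0.1].
res = minimize(neg_log_likelihood, x0=[3.0, 1.0, 0.02],
               args=(levels, n, k), method="L-BFGS-B",
               bounds=[(None, None), (1e-3, None), (0.0, 0.1)])
alpha_hat, beta_hat, lam_hat = res.x
```

A parametric bootstrap for confidence intervals would then repeatedly resample counts from Binomial(n, p-hat) and refit; the Bayesian alternative instead samples the posterior over (alpha, beta, lam) with MCMC.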
Architecturally, Psignifit 3 is implemented in a modular fashion consisting of several components. The main computational routines are written in C++ (Psi++). An interface layer is generated with SWIG (swignifit), and the main user interface and plotting functions are written in Python (pypsignifit) using NumPy and matplotlib. I will talk a little about the benefits and drawbacks of this layered approach, and about how we have repeatedly considered moving to Cython.
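To illustrate the layering, a SWIG interface file of roughly this shape (all names here are hypothetical, not swignifit's actual interface) is enough to expose a C++ header to Python:

```
/* psi_example.i -- hypothetical interface file; names are illustrative only */
%module psi_example

%{
/* Code in this block is copied verbatim into the generated wrapper. */
#include "psychometric.h"   /* hypothetical C++ header */
%}

/* Ask SWIG to parse the header and generate Python bindings for it. */
%include "psychometric.h"
```

Running `swig -c++ -python psi_example.i` generates wrapper code that, once compiled against the C++ library, is importable as an ordinary Python module.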
As a side note, there is also a Matlab interface (mpsignifit). While it does not support the full functionality of the Python interface, it does give access to the core computational routines. We provide it mainly because the predecessor, Psignifit 2, is written in Matlab, and we want to maintain a certain degree of backwards compatibility.
In terms of best practices for open-source software engineering, our project is not doing too badly: the code is hosted on SourceForge, we use Git for version control, and there are unit tests on all three architectural levels. Furthermore, there is a somewhat active project mailing list, and we have held one sprint.
As for users and publications: several well-known researchers have used Psignifit in their work, and our lab colleagues use the toolbox and regularly report issues and bugs. There are between 50 and 100 downloads per month from the SourceForge site, and so far the toolbox has been featured in one Master's thesis and one journal paper that has recently been accepted for publication.
  
Haenel, N.V. (2011). A Large-Scale Comparison of Classical Bootstrap Confidence Intervals to Bayesian Credible Intervals for Psychometric Functions. M.Sc. thesis, Bernstein Center for Computational Neuroscience Berlin, April 2011.
Fründ, I., Haenel, N.V., Wichmann, F.A. (2011). Inference for psychometric functions in the presence of nonstationary behavior. Journal of Vision, in press.