Thursday, December 19, 2013

Latest news from Q-interactive

Pearson is sending out a new version of Q-interactive. There are two changes that may be of interest:

1. Q-interactive now supports an iPad Mini (for the examiner only) and iPad Air.
2. The manuals for all the tests are now available online.

Good changes.


Thursday, December 12, 2013

A low tech gadget to increase food intake in Alzheimer's Disease

Apparently serving food on red plates makes DAT patients eat 25% more. That is a lot! The simple explanation is that they can see the food better because of the improved contrast. For some more information, follow the link:

A bonus goodie: a blog on caring for somebody with Lewy Body Dementia
(yes, that's where I found the red plates)


Wednesday, December 11, 2013

Izabela visits the Wild West

I promised this post a while ago, but to get the information, I had to make my old PC work again, and that was no easy thing. Apologies for the delay.

My first trip into the wild west of brain training was working through the Driving module from Posit Science. It is no longer available in its original form, because Posit Science's offerings are now online and available for a cheap monthly subscription. That wasn't the case when I started playing with it: the training programs arrived on disks and cost well over a thousand dollars. I decided to try out the Driving module, as it was a small subset of exercises and considerably cheaper. Also, at the time I had just experienced a nasty close call with a motorcyclist and was feeling rather anxious on the road.

The package contained visual training tasks. The three that I have data on are:

- Sweep Seeker - an exercise in which one had to decide on the direction of movement of alternating gray and white bands. I have a very vague memory of movement perception research using similar stimuli - maybe somebody can give us some info in the comments? You can read about the exercise here. It is supposed to increase visual processing speed.

- Jewel Diver - keeping track of multiple moving objects among distractors. A similar exercise available from BrainHQ is explained here. This was supposed to train divided visual attention.

- Road Tour - in this exercise one needed to notice things in the visual periphery. It trained up what is called the useful field of view. Basically, the useful field of view is the area from which you can obtain information at one glance. Research shows that it is closely related to driving safety, and I know that it is one of the aspects OTs may assess in relation to driving.

I think there were other tasks, but in the end, I did not practice them enough, so have no test-retest data.

The package was fairly rigorous - you were supposed to do it every day, and there was a reasonable amount of material to be done.

It started with a pre-test, on which I scored well (I cannot remember the details), apart from the useful field of view task, which was picked up as a problem. Unfortunately, the program did not keep percentile rankings; I only have the initial speed (useful field of view is measured by how long you need to see a picture to pick up detail in the periphery). For those who can make anything of it, it was 554 ms, and apparently it wasn't that good.

When I dug out my data the other week, I realised that I never finished the whole package - I stopped just past the halfway point. However, I had re-test data on three tasks:

- Sweep Seeker: baseline 46 ms, 11% improvement, training time 1 h 30 min
- Jewel Diver: baseline 5.6 objects, improved to 6.2 objects, 12% improvement
- Road Tour: baseline 554 ms, 69% improvement, training time 2 h 20 min

Yes, the interesting one is the Road Tour with an improvement of almost 70%.
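For anyone who wants to sanity-check these figures, the arithmetic can be sketched in a few lines. This is my own back-of-envelope calculation, assuming that "improvement" on the timed tasks means a reduction in the threshold time:

```python
# Back-of-envelope check of the improvement figures above.
# Assumption (mine): for the timed tasks, "improvement" means a
# reduction in the threshold time needed to perform the task.

def time_after_improvement(baseline_ms, pct_improvement):
    """Threshold time remaining after a given percentage improvement."""
    return baseline_ms * (1 - pct_improvement / 100)

# Road Tour: 554 ms baseline with a 69% improvement
print(round(time_after_improvement(554, 69)))  # -> 172 (ms)

# Jewel Diver: improvement expressed as a percentage gain in objects tracked
gain_pct = (6.2 - 5.6) / 5.6 * 100
print(round(gain_pct, 1))  # -> 10.7 (close to the reported 12%)
```

So a 69% improvement takes the Road Tour threshold from 554 ms down to roughly 172 ms, which is a genuinely large change.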

At that point in the training I went away for a few days, did a lot of driving, and noticed a difference in what I saw. I felt as if my eyes had been slit at the outside corners, with me seeing a lot more to the side. I was very happy with the result, declared myself improved, and did not bother to return to training after the holiday, especially as my driving anxiety seemed to disappear around this time.

This whole thing was a bit weird. While I expected improvement, I did not expect dramatic improvement - 69% is a lot, especially in less than 3 hours of training. And the improvement was steady, spread over sessions, rather than appearing in the first few days as a result of simply getting acquainted with the task.

The explanation I settled on was this:
I wear glasses, and the periphery of my vision is either uncorrected or covered by frames. This means that peripheral information is significantly degraded. My brain may well have learned to happily ignore it, because there were no goodies there. In this context, training the periphery of the visual field simply re-activated attention to it. As things were disused rather than impaired, the reactivation resulted in quick and significant change.
So, this is my hypothesis. I'm sure you can think of others, and you may be right at that. And this was an n=1 experiment with a non-objective experimenter, so the results are a bit iffy. But it was an interesting process.

I would like to re-test myself to see how much of this improvement in useful field of view stayed with me. Unfortunately, while I still have the program, it would force me to do some practice before it would retest me, defeating the goal of the exercise. If somebody has access to a useful field of view measure, this lab rat will be happy to have a go.


PS1: there are changes between the old product and BrainHQ, mainly in intensity, that may have lowered the efficacy of the training.

PS2: Not all brain training is so effortless or quick, of which more in a future post about my second trip into the Wild West.

Q-interactive Hacks

For those who are considering moving to Q-interactive, here are some hacks and tips:

1. While Pearson claims not to support the iPad mini, the mini works exactly like the bigger models. The only difference is the pixel size - the same screen real estate fits on a smaller device. All the apps for the big iPad work on the small one as well.
While the iPad for presenting information to clients needs to be full-size, the clinician's iPad can be a mini. I have been using the mini for quite a while now and have had no trouble.
Why would the mini be better? For me it is the weight of my handbag. This iPad goes everywhere with me, because I can access my practice calendar, room bookings and a few other important goodies from it. And after a few hours of carrying it, the weight and size difference really matter. So if you are thinking of buying two iPads for Q-interactive, it may be worthwhile to consider a mini as an option.

2. With thanks to Debbie Anderson who mentioned this hack at the latest CCN Conference:
Those of us who routinely plug all the scores into a WAIS-WMS scoring program may as well save $4.40 per client by not administering SS and Cd on the iPad. You just need a timer to get a raw score, which you then put into the scoring program.

3. Also with thanks to Debbie Anderson, who devised this hack:
After each subtest, the iPad provides raw and scaled scores, which is very nice. It is useful to record these on a piece of paper. This is a backup in case of iPad malfunction (although it is a solid program, with no problems to date).  Also, it provides a convenient summary of scores to be plugged into the scoring program without opening the Q-interactive and digging into its bowels. I have made myself a nice recording sheet, with space for behavioural observations and find it very convenient.

4. If you are planning to buy the subscription in the near future, I have heard that it is worthwhile to do it now and pay before the end of the year, because the inclusion of the goodies that Pearson supplies with the subscription is ending. The goodies include a pack of each of the forms you may be using with Q-interactive, and their value considerably exceeds the price of the subscription. I've heard that if you pay this year, you may set it up in such a way that the subscription time starts a bit later - handy if you don't want to pay for holiday time. Also, you may want to give yourself time to use the free 1-month trial to skill up on the new way of administering things.


Sunday, November 3, 2013

The trip into the Wild West continues

There were some great posts at npinoz after I posted about this series. Nicola Gates talked about the standards of studies that we need to take into account when reviewing research, and Jonathan Foster posted a link to an article he recently published. This is really good stuff.

I have to admit that I am not going to be as rigorous as Nicola suggests (but maybe she will consider giving us a review). Today I am going to talk about two articles that made an impression on me.

The first study was reviewed by Jonathan in his article, and has been mentioned on npinoz before.
This is Owen, A.M., Hampshire, A., Grahn, J.A., Stenton, R., Dajani, S., Burns, A.S., Howard, R.J., Ballard, C.G. (2010). Putting brain training to the test. Nature, 465, 775-778 (free full text to be found here). It was a huge study, with 11,430 participants, and it found no effect of brain training at all.

The study used viewers of a popular BBC series, who did a 6-week online brain training program. Four tests were used at pre-test. The assessment tasks (looking at reasoning, digits forward, spatial working memory, and paired associate learning) have good literature support and could be argued to be sensitive to cognitive change. The training tasks are described in the article, but no source or support information is given for their construction or choice.
The minimum dose of training was set at 10 minutes a day, 3 days a week. There were two training groups and a control group, which answered a series of obscure questions. Not surprisingly, the experimental groups improved on the trained tasks. However, all groups improved at the same rate on the post-test assessment.

Well, with over 11 thousand subjects, even minuscule differences would reach significance. So the study did not report any significance testing and concentrated instead on effect sizes (leaving me fairly bereft, to tell the truth). But the bottom line was quite clear - brain training did not make a difference.
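To see why significance testing would be uninformative at this scale, here is a minimal illustration (the numbers are invented for the sketch, not taken from the study): with two groups of roughly 5,700 each, even a trivially small standardized difference clears the conventional significance threshold.

```python
# Illustrative only - invented numbers, not the study's data.
# For two independent groups of equal size n, the t statistic is
# approximately t = d * (n / 2) ** 0.5, where d is Cohen's d
# (the standardized mean difference).
n_per_group = 5715   # ~11,430 participants split into two groups
d = 0.05             # a trivially small effect size

t = d * (n_per_group / 2) ** 0.5
print(round(t, 2))   # -> 2.67, well past the ~1.96 cutoff for p < .05
```

Which is why a focus on effect sizes is the right call at this sample size, even if it leaves significance-lovers bereft.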

For a scientific critique of this study you can go here. But I personally believe the results: the BBC producers ran a lot of participants through brain training tasks similar to those commercially available and found no effects of training.

Dr Henry Mahncke has been quoted here as saying that using this study to discount all brain training products would be 'like concluding that there are no compounds to fight bacteria because the compound you tested was sugar and not penicillin.'

So, is there penicillin out there? Here is another study:

Ball, K., Edwards, J.D., Ross, L.A., McGwin, G. Jr (2010). Cognitive training decreases motor vehicle collision involvement of older drivers. Journal of the American Geriatrics Society, 58(11), 2107-2113. Free full text to be found here. Worth a careful read.

This was also a big study, with 908 older participants (over 65, no significant problems) divided into four groups. The control group received no intervention. The remaining groups received memory training, reasoning training or speed-of-processing training. Some or all of this training, I believe, consisted of tasks produced by Posit Science. Memory training involved teaching mnemonic strategies. Reasoning training consisted of teaching strategies and comprehension of patterns in everyday life. The 'speed of processing' training was really computerized training in visual attention skills. Training involved a maximum of 10 sessions (average of 9, range of 0 to 10).
The outcome measure was the number of at-fault motor vehicle accidents, adjusted for driving exposure calculated in a complicated way to obtain person-miles of travel (using self-reported annual mileage during the follow-up - not an ideal measure, but better than nothing). Data about accidents were taken from state-kept statistics.
The outcome: training in both reasoning and speed of processing reduced at-fault accidents. Reasoning training was significant only when the results were adjusted for age at baseline, gender, race, education, location, visual acuity, health, depression and mental status. Training in speed of processing was significant with and without such adjustments.
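The exposure adjustment is essentially a shift from raw counts to rates. Here is a minimal sketch of the idea, with invented numbers purely to show why person-miles matter (the study's actual adjustment was more complicated):

```python
# Illustrative only - invented numbers, not the study's data.
# Adjusting crash counts for driving exposure turns a raw count
# into a rate per unit of person-miles of travel.

def at_fault_rate(crashes, person_miles, per=1_000_000):
    """At-fault crashes per `per` person-miles of travel."""
    return crashes / person_miles * per

# A group that drives less can have fewer crashes yet a worse rate:
print(at_fault_rate(10, 2_000_000))  # -> 5.0 crashes per million miles
print(at_fault_rate(12, 6_000_000))  # -> 2.0 crashes per million miles
```

Without this kind of adjustment, a training group that simply drove less would look safer than it really was.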

Now, here is some serious generalisation of brain training to everyday life outcomes.

To my mind, the main differences between the studies are twofold: the choice of subjects and the choice of tasks. Somehow it is easier for me to believe that effects of brain training will be observable in the aged brain, possibly in the context of mild cognitive impairment, than in healthy adults. Also, the tasks in the Nature study were designed by BBC producers, while the tasks in the driving study came out of research by Professor Merzenich, a father of brain plasticity research.

I can't comment much on the reasoning tasks, but I tried the 'speed of processing' training that was given to one of the study groups. It was a couple of years ago or more, when Posit Science was selling its products as one-off purchases for ugly sums of money (these days it is a neat monthly subscription of $14). I bought the driving module, which was the cheapest. And here I leave this story on a cliffhanger, because I want to resurrect my old PC and get the data from this training before continuing.


Saturday, November 2, 2013

Young and shaky

The next post in the brain training series is being prepared. In the meantime, just a quick note on an interesting blog by a man with early onset Parkinson's Disease who experienced Deep Brain Stimulation. You can see it on


Friday, October 25, 2013

Brain Training - The Wild West of Neuropsychology

Brain training is something neuropsychology needs to keep an eye on. There was supposed to be a brain training debate at the last conference, with FOR and AGAINST teams fighting it out in public. Because I have made noises on the topic before, I was invited to join the FOR team, and started beefing up my arguments and research in earnest. As those who went to the conference know, there was no debate. The reason? The organizers could not find enough people to stage it. Surprisingly, we could have fielded the FOR team; it was the AGAINST team that was missing.

I think that this is symptomatic of how the majority of neuropsychologists think about the topic: we dismiss it out of hand as pure snake oil. But it is dismissed so quickly that it is rarely investigated, so nobody was prepared to talk with authority against brain training.

I think this is both an intellectual and a strategic mistake. An intellectual mistake, because we give up on the issue without knowing enough to make an informed decision. A strategic mistake, because we are writing ourselves out of the biggest development of contemporary neuroscience, to the detriment of the profession and, more importantly, of our clients.

So, I am hoping to start the discussion here. I will mostly present the FOR position, and please feel free to discuss/disagree/flame me in the comments. I am planning a few posts on this, and hope to invite a couple of people to comment on the issue.
One disclaimer: I have very recently completed the Cogmed Coach course. Having read the literature and being graced with low working memory and high curiosity, I had been planning to do it for some time - and the debate sped things up. I do not currently earn any money from this, but may do so in the future.


So, Question 1: Can intervention improve brain function?

I have a sneaking feeling that the underlying opinion in our profession is 'no, it cannot'. Or at least it was during the 2012 conference - the outlook seemed more positive this year. A lot of us believe that there is not that much that rehabilitation can do above and beyond the normal process of recovery. Not that we complain about our Speech Pathology colleagues offering 'ineffective' treatments - which is what they would be, if we really believed this philosophy. However, we distrust intervention enough not to do much in the direction of cognitive rehabilitation ourselves. So what does the literature say? Here is a meta-analysis and a review.

1. Rohling, M.L., Faust, M.E., Beverly, B., Demakis, G. (2009). Effectiveness of cognitive rehabilitation following acquired brain injury: A meta-analysis re-examination of Cicerone et al.'s (2000, 2005) systematic reviews. Neuropsychology, 23(1), 20-39. Full text here.

This was a meta-analysis of 115 studies in cognitive rehabilitation. These included various treatments and techniques and did not focus on computer-based rehabilitation. Here are some of the interesting findings:

1. There was a small but statistically significant effect of rehabilitation on cognitive function. They concluded that there was a scientific base to support the assumption that cognitive rehabilitation is an effective treatment for persons with acquired brain injury.

2. Treatments for attention, visuospatial and language deficits produced significant improvements. Memory treatments and comprehensive treatments (training 'everything') failed to produce improvement.
I wonder if the lack of improvement in memory is what underlies neuropsychologists' skepticism about cognitive rehabilitation?

3. Attention training produced improvement in attention only (apart from the TBI group, in which it also improved global cognitive functioning). Visuo-spatial rehabilitation also improved performance on memory, language and comprehension measures.

My bottom line: treatments for attention, visuo-spatial function and language work, to a greater or lesser degree. Don't bother rehabilitating memory - remediate instead. Generalized interventions don't work - focus on specific functions.

Cicerone, K.D. et al. (2011). Evidence-based cognitive rehabilitation: Updated review of the literature from 2003 through 2008. Archives of Physical Medicine and Rehabilitation, 92, 519-530. Abstract here, try the APS databases for full text.

This is the third in a series of review articles by the same group, the first published in 2000. Apart from the review, it provides quite a few recommendations. Among others, these include:

1. 'Computer-based interventions may be considered as an adjunct to clinician-guided treatment for the remediation of attention deficits and cognitive-linguistic deficits after TBI or stroke. Sole reliance on repeated exposure and practice on computer-based tasks without some involvement and intervention by a therapist is not recommended.'

2. 'Computer-based interventions intended to produce extension of damaged visual fields may be considered for people with TBI or stroke'. But: 'The use of isolated microcomputer exercises to treat left neglect after stroke does not appear effective and is not recommended.' (This appears to be based on training of focus on the left hemi-field. I am not sure whether the authors considered the developments in arousal training for left neglect.)

3. For memory impairment computer-based remediation is not recommended. Instead, memory strategy training, use of compensation aids, error-less learning techniques and group-based interventions are recommended.

4. For executive difficulties, no computer-based interventions are recommended.

I'm sure that there are more reviews. But this has been a long enough post (apologies).

If you want to find out more, please read Norman Doidge's The Brain That Changes Itself. It is a great read. It is a neuroscience book that reached the status of a bestseller, and if nothing else, this should make us have a look. More importantly, it is one of the great examples of scientific reporting. It talks about stuff that we did not learn at school (or uni) and talks about it well.


PS: Please comment and let the games begin.

Thursday, October 10, 2013

Which smartphone to recommend to clients

There is no doubt in my mind that a smartphone is the best available memory aid. I estimate that I keep about 60% of all my day-to-day memory on my iPhone and tablet. I would argue that a person with a moderate memory problem but excellent smartphone skills can potentially outperform the majority of their peers.

So, which smartphone? I have just read an article about a top-end Android phone (the latest Galaxy, I think), that made me salivate and wonder about switching. But on reflection, I would still not change my recommendations for phones in memory remediation.

These are:

1. If a person is already using a smartphone, of whatever ilk, do not replace it with anything else. In this case, your job should be to just point out a few extra features that they should be using.
  • One of the most important of these, especially for clients who tend to lose things, is a special keyring that raises hell if the keys and the phone get separated (e.g. one gets left behind in a restaurant). Which reminds me, I really must get one of those.
  • Another thing to attend to is making sure that the phone's data is backed up. In particular, syncing calendars is important, so that if the smartphone gets lost or damaged, the appointments don't disappear with it. I tend to sync to Google Calendar, which I find very convenient: you can add appointments on a computer, smartphone or tablet, and they propagate across devices. Also, from a computer you can request multiple reminders, in the form of texts or emails, that will arrive at your smartphone. However, Google Calendar is only one of a few systems worth considering.
  • The third matter to attend to is making sure that your client is using all the basic memory functions of the device: the notes app, keyword searching, the camera for taking pictures of relevant info, the ability to add photographs to contacts, voice recording, asking people to text you information, etc.

2. If a person is new to smartphones, I would strongly recommend iPhone. The reasons for this are:
  • The 'closed' architecture, or the inability to mess around with the operating system. All apps are pre-tested by Apple and generally work without problems. The thing does not require technical know-how to manage and is fairly resistant to messing around with software. In other words, it is relatively hard to stuff up.
  • All models of the iPhone are basically the same, with the same suite of basic apps. I have recently damaged my iPhone and had to go back to the original one I had (imported from the US before the Australian release, version 0). It still worked, and still had all the basics that I use every day. This is important if you are giving advice, recommending apps, or generally helping the person learn how to use the phone as you don't have to know the functionality of multiple systems and models (although this argument is invalid if you are an Android expert). It also means that the client can buy an older version of the iPhone, which can be very much cheaper (especially second-hand), and still enjoy the core functionality.
  • The quality of the iPhone is generally good. In contrast, I wonder about the quality of the low-end phones on the market.
So, for me the uniformity of architecture across models and the difficulty of 'breaking' the phone (software only, if you want a physically unbreakable phone, you'll have to go to Nokia), are the most significant features.

Of course, any Android fans, and those who prefer other brands of smartphones out there, are most welcome to explain their opinion in the comments or as a guest post.


Thursday, September 26, 2013

The profusion of Q's

This blog has had plenty of information on Q-interactive, which is the iPad software that provides WAIS-IV and WISC. But there is more than one Q out there. Pearson also has an internet-based test administration platform called Q-global. The prices are similarly structured, including per-use or per-year subscription.

There are a few tests on this platform that may be of interest to us.

The most interesting one from my perspective is the Alloway Working Memory Assessment - an automated on-screen working memory battery for ages 5-79. There is also an older PC-based version of this test. Has anybody out there used it and could provide their impressions in the comments? I'd love to hear about its validity and reliability.

Other interesting tests on Q-global include Delis-Rating of Executive Function and Woodcock Reading Mastery Test for those poor souls among us that have to include an assessment of academic ability in their testing (been there, done that, got the scars to prove it).

I was wondering if anybody out there had experience with Q-global and can give us a review.


Wednesday, September 25, 2013


I may have mentioned this iPad app before, but it is only recently that I started using it consistently in my practice. The app contains a variety of psychological tests and questionnaires. It has two versions - a free one and one costing $59.99. The difference is in the number and type of tests available within the app.

The free version has the Depression Anxiety Stress Scales - both the 21- and 42-item versions. I tend to use these on a regular basis to screen for and quantify any emotional problems. The scale is free, which makes it much more attractive than the BDI and STAI-S, and it is a nice tool for a screener. However, it tends to be a bit of a pain to score by hand - it takes time that I'd rather use for more enjoyable things. That's where the app comes in.

The electronic version is filled in by the client on an iPad, and you receive an e-mail sending you to a secure website where the client's results wait, all nice and scored. It all ends up being quicker and easier than scoring by hand.

The app includes a number of scales, generally focusing on the clinical side of psychology, and is elegant and easy to use. I'd recommend it, even if you are going to just use the Depression Anxiety Stress Scales.



The hottest page on the internet - if you are an Australian neuropsychologist

This is an amazing, up-to-date collection of Australian norms.

The news just arrived in the CCN newsletter, and I'm shamelessly re-posting it here, because it is truly worth re-posting:

AP and AJP virtual issue on Australian Neuropsychological Normative Data has now been published and is available here: 
This is a great resource for us all. Thank you to Prof Simon Crowe who edited this issue.

Not only is it fantastic, but it looks like it is also free to use.



Tuesday, September 24, 2013

Vibration-cancelling spoon for people with Parkinson's and tremors

A very worthwhile gadget. It uses an accelerometer and a microprocessor to detect tremor, identify its type and adjust the end of the spoon so that the tremor is cancelled. There are several attachments planned, including a fork and a key holder.

More information and a picture on:

the company website is here, with pictures, videos, etc.:

Now the bad news:
- it is still in development, with the first spoons planned to ship in December
- it costs US$295

Still, it would be pretty useful for somebody with severe tremor. I wonder if they will extend their technology to handwriting implements in the future.


Monday, September 23, 2013

Q-Interactive - what is included now and short review of DKEFS

I've just reviewed Q-interactive's offerings and here is a full list:

- Trail Making Test
- Verbal Fluency Test
- Verbal Fluency Test – alternate form
- Design Fluency Test
- Color-Word Interference Test
- Animal Sorting
- Design Copying
- Memory for Designs
- Fingertip Tapping
- Word Generation
- Memory for Designs, Delayed
- Picture Puzzles
- Children's Memory Scale: Dot Locations, Picture Locations, Dot Locations 2
- CVLT-II (Adults) – Standard and Alternate
- CVLT-C (Children)
- CVLT-II Short

This is a nice little set of tests, and makes Q-interactive quite attractive. I have to admit that I was looking forward to some verbal CMS subtests, though. And NEPSY also seems to lack verbal memory tests, which is a pity. I hope these will be included in the next update.

The app itself is also getting updated in the next few days, so that it can be made compatible with the new operating system for iPads. The nice things promised for the new version include the ability to export results into Excel, which will display them in a cleaner format. Also, the Central part will enable some cutting and pasting of results into reports (not that this is a good thing).

I am afraid that I have started to sound like a marketing arm of Pearson, so to balance it out, here is the promised review of the D-KEFS:

- generally, it is good, though you still have to use the paper forms for Trail Making and Design Fluency, which adds to costs
- Color-Word Interference (Stroop) was nice and screen-based
- I had serious difficulties administering Verbal Fluency: with a quick client, my stylus writing just wasn't up to the task (I find writing with a stylus a bit slower than with a pen - the iPad seems unable to cope with really fast, and possibly extra-scribbly, writing; this had not been a problem until I hit Verbal Fluency). I started writing the words on a piece of paper, and then could not enter them afterwards, so had to rely on manual scoring and norming. This made me wary of using the electronic version of this test with high-functioning clients (or possibly with anybody). There may be a non-intuitive way of putting in the words after the fact, but I did not notice it at the time. If anybody has found it, or if it has appeared since I last looked, I'd appreciate a comment.
- the test returned results in the normal range when clinically a client seemed to have symptoms of executive dysfunction. I don't use the D-KEFS on a regular basis, but have been told that its norms are rather 'permissive' in this way. However, I have not looked at this properly, so don't rely on my impression. Considering that the electronic and paper versions use the same normative data, this is more about liking or disliking the D-KEFS than the electronic version of it.

I will try to have a look at NEPSY and CMS before the end of September and report my impressions.

Thursday, August 29, 2013

Conference preparation for a fashion-conscious neuropsychologist

Neuropsychology T-shirts:

This site has t-shirts, baseball hats, brain totes, and a brain tie:

Some more neuropsychology clothes:

This site has some nice brain artwork on their clothes:

Brain earrings here:
and here:
and here:

For those who like knitting, here is a brain handbag:
And another, highly realistic brain handbag:

Uhm, I think I should return to report writing.

Q-interactive news (WAIS on iPad)

I have heard that Q-interactive is acquiring the NEPSY and two subtests of the Children's Memory Scale in its next update (which may be coming tomorrow). I don't know the full details (the official Pearson email did not find its way to my inbox, unfortunately), but the addition of the CMS really makes me happy. It is one of those batteries that one needs from time to time, but cannot borrow from anywhere, because they are rarer than hen's teeth.

Also, I've just checked the Pearson website, and there is an offer of 1-month's free trial on their website.

On the minus side, the cost calculations I posted before are no longer accurate, as the yearly license is now pricier at $475. Sigh.



Thursday, August 8, 2013

For the record - cross posting

There are a few interesting geeky things floating around that I decided to cross-post, mainly so that I know where to find the information when and if I need it again:

a do-it-yourself IQ test on the iPad:
(cross-posting from Les Posen and Dougal from npinoz)
information to be found at:

The main problem with the app is that it is aimed at healthcare providers and patients, so it tries to position itself above the typical 'women's magazine' type of test. While it does seem to draw on some psychological background in test development, its confusion between the WAIS and the WISC suggests that the authors are not particularly expert.
A concerning development.

(cross-posting from Skye McDonald on onpinoz)
A website with information about studies of treatments for psychological and neuropsychological disorders. Looks positively fantastic. A must for evidence-based practice.

SOS Mobile Watch
(cross-posting from Katie Kirby)
GPS tracking, an alarm function and the ability for a carer to call the person wearing the watch, without the wearer having to press a button. A single press of the button can call family or a monitoring service. Runs on the Optus network.

Article on Neuropsychology in Medical Observer
(cross-posting from Les Posen)
Always nice to know that medicos are reading about us.

Is that a psychologist in your pocket? The use of smartphone apps and web based applications in psychology
(from APS mailout)
Presented by Dr Michael Carr-Gregg
I suspect that it won't have much for neuropsychologists, but for those of us who do treatment, it may be useful.
Friday, 20 September

On-line Resources
(cross-postings from Katie Kirby - these are fantastic - thank you Katie and Gloria)

Fitness to drive:
  • Ballarat Health Services Fitness to Drive seminar March 2012: link to presenters pdf + link to presentations. After clicking on Assessing Fitness to Drive, click on the bolded segments to bring up the presentations.

Memory website from a cognitive psychologist (some useful information in plain language):
Neurovascular Tutorial website (useful refresher):
Dorothy Bishop blog:
Scroll to bottom of webpage for link to blog.

A video and brief thoughts on the power of music by Oliver Sacks:

Squalor and Hoarding:
Squalor & Hoarding Toolkit:
Swinburne Uni Psychology Clinic:
Hoarding Related Research:
Vic Health Discussion Paper 2012:

Videos for carers of people with dementia:
A multidisciplinary team of researchers from The University of Queensland has developed a set of educational videos on communication and memory strategies for professional and home carers of people with dementia. The MESSAGE Communication Strategies in Dementia and the RECAPS Memory Strategies in Dementia videos can be accessed in full and free of charge at

Brain education website:
From Canadian Institutes of Health Research:

APA Guidelines for evaluation of dementia (article)

Rehabilitation literature databases:
Center for International Rehabilitation Research Information & Exchange:

National Rehabilitation Information Center:


Wednesday, July 24, 2013

Aphasia simulator and other goodies

Katie Kirby recently sent out information about an aphasia website:

and I've had a look at it this morning while trying to avoid writing reports. 

It is a nice resource for patient families. In particular, I was impressed with aphasia simulator, which covers several types of aphasic difficulties and illustrates them in the areas of listening, speaking, reading and writing. It is a lovely demonstration of how it feels to have aphasia, and a brilliant tool for feedback sessions. Worth having a look.


Wednesday, July 10, 2013

Update on WAIS on iPad

In my last post I suggested that you have a very good look at Pearson's privacy policy. This was due to concerns I had about some of the wording, and also the fact that it looked like the policy could be changed without alerting users.

I did not accept the policy and raised my concerns with Pearson directly, and was very, very impressed by their reaction. The wording I did not like has now been changed and the users will be alerted to any changes through a notice on the website that they will have to acknowledge before proceeding.

Well done, Pearson!


Wednesday, July 3, 2013

The iPad WAIS is live!

The Q-interactive app is now available for purchase in Australia. A new version has come out with some improvements. The most obvious of these is the download speed. Now it takes seconds rather than hours to download the app. Yay!

According to Pearson's information, the new version contains the following improvements:
  • 'I forgot my username' and 'I forgot my password' – self-service resetting on Central
  • Client information screen reduced to 6 fields only
  • Directional assistance on the 'Client' iPad, i.e. a line will now indicate which edge should be closest to the examinee
  • Network and download optimisation
  • Database enhancements

One word of warning, though. When you first open the website that you need to use with the app, you will find a Terms and Policies page. Have a very good look at the privacy policy before using the product. There are some statements there that you need to think about.


Monday, June 3, 2013

So, should we move to WAIS on the iPad?

So, will I use WAIS-IV on the iPad after the trial is over?

This is the bottom line, isn't it?

I found assessing on the iPad to be a bit more fun - not unimportant for those of us who are close to burn-out from constant testing. The geek appeal of using the iPads was somewhat moderated by my feeling that an iPad app should be sleeker than what was offered.

Also, not having to add up all the numbers within the subtests (twice, to make sure I do not make mistakes - just because I am paranoid) was a very nice feature. Again, the joy of this was moderated by the fact that getting the raw scores out after the administration, to put them into scoring software, is a bit of a pain. The iPad provides a decent selection of results (index scores, scaled scores, significance of differences between indexes, including base rates), but I need to compare the WAIS to the WMS, and that cannot be done on the iPad.

Another issue is that somewhere at the back of my mind there is a worry that electronic administration is less reliable than paper administration, and that a catastrophe may happen. I am happy to report, however, that so far the app seems very solid.

For me, another issue to deal with is that I perform a significant number of forensic assessments in jails, which do not allow electronic equipment.

The financials, I suspect, will be the most important issue in my adoption of iPad administration.

The price of two iPads is much lower than the price of even one of the test batteries that can be accessed through the app. Two iPads in the minimum necessary configuration can be bought for considerably less than a thousand dollars, while the WAIS-IV currently costs $2,720 and the WISC-IV costs $2,784. Both can be used through the app, which also includes the California Verbal Learning Test and some subtests of the D-KEFS. The price difference will become even more pronounced once the WMS-IV is put on the iPad, which seems to be planned for the near future.

On the other hand, there is the yearly Q-interactive subscription cost, which I believe to be about $300, plus the extra cost of administering the tests. According to my calculations, a standard set of WAIS-IV forms for one client currently costs $19.52 (calculated without the Cancellation booklet, but including delivery). The cost of iPad administration of 10 WAIS-IV subtests, plus the Coding booklet, is $29.70: a difference of $10.18 per client. Interestingly, while each WAIS-IV subtest costs $2.20, a WISC subtest costs $1.50, so those specialising in children may be in a much better financial position.

Therefore, on-going costs are higher, but up-front costs are considerably lower. Whether that is good or bad will depend on how many assessments one does in a week. I suspect that for anyone starting a private practice in neuropsychology, or running a small private practice part-time, this may be a very worthwhile deal. For those who already have the necessary test batteries, it probably won't be financially worthwhile to switch. This may be a different story, however, when new versions of the big batteries appear on the market.
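To make the trade-off concrete, here is a rough first-year break-even sketch in Python, using the approximate figures quoted in this post (the sub-$1,000 iPad estimate and the $300 subscription are my own rough numbers, and all prices will date quickly):

```python
# Rough first-year break-even sketch using the approximate figures above.
# All values are my estimates in AUD and will date quickly - illustration only.

wais_kit = 2720.00        # paper WAIS-IV kit, upfront
ipads = 1000.00           # two iPads, upfront (generous estimate)
subscription = 300.00     # yearly Q-interactive licence (approximate)
paper_per_client = 19.52  # paper WAIS-IV forms, per client
ipad_per_client = 29.70   # iPad administration plus Coding booklet, per client

upfront_saving = wais_kit - ipads                       # saved by not buying the kit
extra_per_client = ipad_per_client - paper_per_client   # $10.18 surcharge per client

# Number of first-year administrations at which the iPad route
# stops being cheaper overall:
break_even = (upfront_saving - subscription) / extra_per_client
print(round(break_even))  # prints 139
```

In other words, on these assumptions the iPad route stays cheaper overall until somewhere around 140 WAIS-IV administrations in the first year, after which the per-client surcharge overtakes the up-front saving.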

So, will I move onto the iPad?

I probably will not.

I already have the batteries, and each administration on the iPad would cost me more than using paper forms. If access to the raw data were handier, I might consider the increased price worth the time I spend scoring the test. If the app were a touch sleeker, I might not be able to resist using it for the fun value.
However, the app is still changing.

I will wait for the next version of it!


PS: This was written a fortnight ago, while I was off work, busy with other things for a month. Today was my first day back seeing clients, and I did 3 paper-based WAIS-IVs. I missed doing it on the iPad! I may have to mix and match for my own amusement. Also, a new version of the app came out last Friday and I need to download it and mess around with it a little. I'll tell all in the next instalment.

Sunday, May 19, 2013

Postcard from WAIS-IV Bootcamp

I feel like I have been at WAIS-IV bootcamp for the past month, or, even better, working daily with a personal test-administration trainer!  How, you might ask?  Well, I was lucky enough to be included in Pearson's beta-testing trials of the WAIS-IV on iPad.  So in the interests of transparency I should let you know that I got to use it for free, and get to keep the iPads...

So, what’s it like?  It’s wonderful if you like to administer the WAIS-IV exactly as it should be done: it doesn’t allow you to make administration errors.  Forget to give the prompt in Visual Puzzles that time is almost up?  No trouble, the iPad reminds you.  Hate doing the reversal items when clients make an error on the first two items of Matrix Reasoning?  No trouble, it automatically goes back to the correct item.  The best news: the clients don’t know they’ve had to reverse, because the items come up automatically on the iPad screen; they haven’t seen you turning the book backwards.

Do you hate writing answers out verbatim?  You can just touch the box that corresponds to that answer.  If, like me, you are a bit pedantic and like to have a record of exactly what was said, you can write it down (on the screen – saving you scanning it in later), and you can check that you got it right against the audio recording of the responses.

Do you ever make scoring errors?  Of course not deliberately, but you’d be surprised how many people make them – this overcomes that.  If you score the responses on the go, it can give you subtest and index scores – and even work out differences amongst indexes.  All on the iPad, surreptitiously, without the client even knowing.  Of course, to do fancy analyses and to compare to the WMS-IV & ACS you still need to input the data into the scoring system, but it gives you lots of information as you go along.

Yes, there are a few tweaks I’ve suggested, but don’t panic – I don’t think computers will replace us yet.  The main difference from the client’s point of view is that they are looking at a screen rather than a book.  All of my patients, of all ages, have really enjoyed the experience.  They can touch the screen to indicate their answers, which is another feature that makes the examiner’s life easier.

Looks like we’ve entered a brave new world ...

Debbie Anderson

Thursday, May 9, 2013

A new app for people with aphasia

Faye Simpson has sent some information about a new app for people with aphasia. It looks very good. To find more information, go to

Thank you very much, Faye!



Tuesday, April 23, 2013

WAIS and WISC on the iPad - the good and the bad

OK, I have now had a week and a half of using WAIS and WISC on the iPad. I have played with it a lot and have now assessed four people with it. My impressions are rather mixed. I am giving Pearson lots of feedback (poor people), so it is quite possible that some of my pet peeves will disappear in the next version. Here it is: the good, the bad and the ugly:

The good:

1. The app is a solid translation of the test into the electronic medium. Things work, the subtests are the same, and the test is an exact copy of the original.

2. The administration instructions on the screen allow for a nice, uniform administration.

3. There are also some additional instructions about the administration and scoring that are easily accessed by tapping an icon.

4. The software scores each subtest and provides the scores immediately after each subtest. It also provides all the scaled scores, index scores and information about differences between indexes, including base rates, after the completion of the test. Nice.

5. If you score as you go, the test lets you know when to discontinue. It also gives you the option of testing limits if you want - very nice.

6. On verbal subtests one can choose one of the 'example' answers by pressing it, which frequently saves you from writing full answers down. There is also an option of writing things down in their entirety.

7. It seems somewhat less onerous to administer the test on the iPad, and one does not have to double-check the scoring.

The bad:
While the program is a competent version of the test, it is not a good app. The sleekness that one expects from an iPad app is lacking. It would probably not bug me if this was a PC application, as one does not expect it to be especially user-friendly. However, the same issues bother me greatly on an iPad. Here is a list of some things that bug me:

1. The app is not easy to start using. Do not count on being able to administer the WAIS without going through at least a couple of dummy clients - some practice is definitely needed.

2. There is no on-screen help for the features of the program, and it would be really useful at times.

3. The writing on the screen is very small and hard to read - granted, the writing is the same size as it would be in the manual. However, it could have been made much bigger, especially considering that there is often one line of 10-point text at the top of the screen, while the rest of the screen is empty.

4. Scoring is fully manual, even in places where you would expect automation. For example, you can choose a 2-point answer on Vocabulary, but that does not automatically give the client a 2-point score. Apparently, automatic scoring was in the original version and was criticised by users in the USA, hence its removal. Judging from Word Reasoning on the WISC, where scoring, in contrast to all other subtests, seems to happen entirely in the background without the clinician's input, full automation was not that good either. Some automaticity in scoring, adjustable by the clinician, would be best.

5. The timer is built-in, and can be used to count up or down. However, there is no option of having a chime at the end of the count-down. I found myself using my own timer on Symbol Search and Coding because of that.

6. The exported results are formatted in quite a painful way, and there is no obvious way of getting a Block Design score with no time bonus, or a Reliable Digit Span.

7. The logging-in problem is still an issue: while Pearson managed to log me onto the two iPads they provided, my attempts to log on using another iPad are still unsuccessful. I understand that the developer is looking into this.

8. One cannot delete a client file once it is created. This is a potential confidentiality issue, e.g. when the client did not attend. I don't like client details floating around in places they don't need to be. I have discussed it with a Pearson representative and she said that they will look into the issue. In the meantime, I found that I can edit the client details completely, leaving only the client number unchanged, so there is a way of dealing with it.

I'll be back with more information on the CVLT and D-KEFS.

Till next time,

Sunday, April 14, 2013

Christmas somewhat postponed

I received the iPads about two weeks ago, but could not start playing with the testing software until last Thursday.

Something went very iffy in my iPads, and the Q-interactive program refused to open: every time it logged me in and then immediately went to update itself, invariably crashing during the update.

Pearson were terrific in their client support, but nothing they tried worked. In the end, they took the iPads away, and it took them a few days to get them logged in again. They still didn't seem to know what went wrong in the first place.

This glitch is a concern, but it seems to have just hexed my iPads. I know that another person who is involved in the trial has been able to open and work the testing software.

In any case, if you have similar problems in using the software, don't assume you are doing anything wrong and get Pearson to work it out. Hopefully, by the end of the trial they will have the problem solved.

In the meantime, I'm madly playing with the software and will send updates shortly.



Christmas has come

I had some lovely things arrive by courier the other day. Pearson is releasing their iPad-based WAIS and WISC, and I am one of the people they chose to take part in the Australian trials. So I received two iPads, with accessories, and free access to Q-interactive for a month. My job is to answer some questions about my experience every week. In the spirit of full disclosure, I need to tell you that I get to keep the iPads and accessories after the end of the trial, and my registration to Q-interactive is free till September. I will have to pay for assessments after the end of the trial. So yes, I am very happy with all of this, and rather positively disposed towards the trial.

Not that I wouldn't be anyway, being rather geeky and enjoying all new gizmos.

I have checked with Pearson, and I am allowed to blog about the trial. The only thing that they ask is that I call their consultant first if I encounter any problems. That sounds very reasonable.

For the first instalment, I am going to summarise my pre-trial opinion of Q-interactive. This is pretty much what I said during last year's conference.

1. The iPad version of the tests is essentially the same as the paper version - Pearson are using the same norms, so they could not change much. There are some nice-to-haves on the iPad, but no major changes. There are some equivalence studies, information about which is on the Internet. So the bottom line is that you can please yourself in terms of whether you use the paper or the iPad version.

2. The main difference is going to be price and cash flow. Administration on the iPad does not require the upfront expense of buying the test battery, and with the WISC, WAIS, D-KEFS (part of it, I believe) and CVLT, this is significant. On the other hand, there is the yearly subscription, and the cost of 2 iPads (plus iPad upgrades, which are likely to be more frequent than every 10 years). Still, without a doubt, the upfront price is much lower. However, there will be a per-client cost of administering the tests. It'll need careful arithmetic to calculate the comparative costs.



Saturday, March 16, 2013

If you really, really want to be good

I know the Mini Mental State Examination by heart. No need for forms.


I think I am no longer able to administer it without breaking copyright, even though the MMSE and various studies about its cutoffs and scores were published in an open-access journal. Wikipedia, the source of all wisdom, says that the authors gave Psychological Assessment Resources exclusive rights to publish, license and manage all intellectual property rights to the MMSE in all countries of the world. Apparently, this was legal at the time, but the legal loophole is now closed, so we don't need to panic about our other tests.

For all I know, the whole copyright issue may not apply to Australia, and I'd really love it if somebody could comment on that.

However, being a good little Neuropsychologist and needing to use MMSE recently for a client unable to manage anything more complex, I have decided to do the right thing and pay for the privilege.

For those of you who are as paranoid as I am about copyright, the process is as follows:

1. register as a PAR user if you are not registered yet. Preferably note down your password, unlike yours truly, who could not remember what it was and had to reset it.

2. search the app store for MMSE (not Mini-Mental, not Examination, not PAR - nothing but MMSE works). Download the free app.

3. log in and pay for some administrations. This is an in-app purchase - it is relatively painless to pay. Make sure you claim it off tax.

5 administrations = $7.49
20 administrations = $35.99
50 administrations = $69.99
100 administrations = $134.99
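For the arithmetically curious, here is a quick sketch of the per-administration cost of each bundle, using the prices quoted above:

```python
# Per-administration cost of each MMSE bundle, using the prices listed above.
bundles = {5: 7.49, 20: 35.99, 50: 69.99, 100: 134.99}

for n, price in sorted(bundles.items()):
    print(f"{n:>3} administrations: ${price / n:.2f} each")
```

Oddly, the 20-administration bundle works out the most expensive per use (about $1.80), while the 100-administration bundle is the cheapest (about $1.35).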

4. Administer MMSE

Some issues here:
- there is no 'don't know' or 'couldn't do it for love or money, so I discontinued' option when recording serial sevens - 5 numbers are required
- although they say you can use the 'world backwards' as a second option, I did not see a way of doing so. I had a relatively quick look, so it may still be hiding there somewhere - I don't feel like paying for another administration to have a look
- there is a nice option of taking a picture of the client's writing or drawing so all your records are within the program

5. Get the scores for each question, the total raw score and the T score at the end of the test. The lovely thing here is that there are proper norms for age and education.

You can then e-mail the document to yourself and, if you want a paper record, print it.

6. Go to PAR toolbox or your paper conversion tables to translate the T score to a percentile.

7. Lie down for a rest - you've earned it!


Tuesday, March 5, 2013

Let the computer do the testing

Hello after a long blog break. My private practice has become nicely popular, which leaves very little time for everything else. That is of course very good, although it doesn't leave much time for fun.

Anyway, while I was diligently working, there has been progress in the matter of computerised testing.

The iPad version of WAIS-IV, WISC-IV, CVLT and D-KEFS is getting very much closer, with Australian trials about to start. More about this a bit later.

Schuhfried Australia, another provider of computer-based neuropsychological tools, are working on translating their multiple references into English and have new manuals (currently available on request from John Ferguson, the Schuhfried representative for Australia; soon to be on their updated website).

There is also a small amount of testing software available for loan to clinicians for a month or two, to gain feedback on clinicians' experience and for research purposes. Schuhfried is also interested in getting universities to do some research on their software.

This is a good development: the Schuhfried software appears very convenient to use and much superior to our current instruments in areas such as attention, information processing, and cognitive assessment related to fitness to drive. At least, as far as one can judge by having a look at the software and its description. It appears that soon we will also be able to read some research on this.


Saturday, January 12, 2013

Oliver Sacks - the website

For those who adore Oliver Sacks, you can find quite a few bits and pieces on his website:

This includes some interesting video clips.


Depression tool

Welcome back after the holidays.

With thanks to Les Posen, I can recommend a nice, well devised tool for assessing depression that can be found on

It is the Center for Epidemiologic Studies Depression Scale (CES-D), which was created in 1977 by Lenore Radloff and revised in 2004 by William Eaton.

The site has information about the scale, about the scores and about depression (short, but with links). It also has an option for e-mailing one's results to the treating professional. As it asks about symptoms within the last week, it strikes me as a nifty tool for monitoring severity of depression over time.