"Generating descriptive text from functional brain images"
"Using Wikipedia to learn semantic feature representations of concrete concepts in neuroimaging experiments"
All of these links pertain to the model analyzed in detail in the first paper (the model with 40 topics and alpha = 25/#topics, the model with the fewest topics at which asymptotic performance is achieved).
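As a point of reference, here is a minimal sketch (not the authors' code) of fitting a topic model with this configuration, using gensim and a hypothetical placeholder corpus standing in for the tokenized Wikipedia articles:

    # Minimal sketch: LDA with 40 topics and a symmetric Dirichlet prior
    # alpha = 25/#topics, as described above. `articles` is a hypothetical
    # stand-in for the tokenized Wikipedia corpus used in the papers.
    from gensim import corpora, models

    num_topics = 40
    articles = [["brain", "image", "concept"], ["hammer", "tool", "handle"]]  # placeholder

    dictionary = corpora.Dictionary(articles)
    bow_corpus = [dictionary.doc2bow(doc) for doc in articles]

    lda = models.LdaModel(bow_corpus, id2word=dictionary, num_topics=num_topics,
                          alpha=[25.0 / num_topics] * num_topics)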
For more details, please contact francisco.pereira_gmail.com.
Text output from brain images, for every concept
For each concept, the table displays the 20 words deemed most probable from its brain image (the 10 that appear in the corresponding Wikipedia article and the 10 that do not, as in Figure 3):
The brain image for a concept is held out as a test set together with the brain image for each of the other 59 concepts; hence there are 59 slightly different sets of topic probabilities predicted for that concept. The table was produced using the average of those 59 sets.
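A minimal sketch, with hypothetical array names, of how a row of the table could be assembled: average the 59 predicted topic-probability vectors for a concept, mix them with the model's topic-word distributions, and split the top words by whether they appear in the concept's Wikipedia article:

    import numpy as np

    def top_words_for_concept(theta_predictions, topic_word, vocab, article_words, k=10):
        # theta_predictions: (59, n_topics) topic probabilities for this concept,
        #   one row per test pair it appeared in (assumed shape)
        # topic_word: (n_topics, n_words) p(word | topic) from the fitted model
        theta_mean = theta_predictions.mean(axis=0)   # average over the 59 pairs
        word_probs = theta_mean @ topic_word          # p(w) = sum_t p(t) * p(w|t)
        order = np.argsort(word_probs)[::-1]          # most probable words first
        in_article = [vocab[i] for i in order if vocab[i] in article_words][:k]
        not_in_article = [vocab[i] for i in order if vocab[i] not in article_words][:k]
        return in_article, not_in_article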
Word probabilities predicted from brain images, for every concept pair
For each pair of concepts, the table shows the word probabilities given by the distributions derived from the example brain images for those concepts when that pair was the test set (as shown for "apartment" and "hammer" in Figure 3):
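A minimal sketch, with hypothetical variable names, of the underlying computation: each held-out brain image is decoded into a topic-probability vector, and word probabilities follow by mixing the fitted model's topic-word distributions:

    import numpy as np

    def word_probabilities(theta_hat, topic_word):
        # theta_hat: (n_topics,) topic probabilities decoded from one brain image
        # topic_word: (n_topics, n_words) p(word | topic)
        return theta_hat @ topic_word  # p(w) = sum_t theta_t * p(w | t)

    # e.g. for the held-out pair ("apartment", "hammer"):
    # probs_apartment = word_probabilities(theta_apartment, topic_word)
    # probs_hammer = word_probabilities(theta_hammer, topic_word)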
Mapping from topics to concepts that use them
This page matches each topic with the concepts (and Wikipedia articles) that assign it the highest probability. It can be used to reach the basis images corresponding to each topic for the two subjects with the highest accuracy (P1 and P4), which appear on the next two pages.
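A minimal sketch, with hypothetical inputs, of how such a mapping could be built: for each topic, rank the concepts by the probability their Wikipedia articles assign to it:

    import numpy as np

    def concepts_per_topic(doc_topic, concept_names, k=5):
        # doc_topic: (n_concepts, n_topics) topic proportions of each
        #   concept's Wikipedia article (assumed layout)
        mapping = {}
        for t in range(doc_topic.shape[1]):
            order = np.argsort(doc_topic[:, t])[::-1][:k]  # highest probability first
            mapping[t] = [concept_names[i] for i in order]
        return mapping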