Abstract

Some types of data graphs are more easily understood than others. Following the suggestion that typically encountered graphs may activate individuals’ cognitive schema quickly, this study investigated prior exposure to and typicality of three common graph types: vertical bar graphs, horizontal bar graphs and line graphs; and three common data patterns: rising, neutral and falling. Results from two samples (N = 57 and N = 30) suggest that vertical bar graphs are encountered more frequently, are rated as more typical and are identified more quickly than horizontal bar graphs and line graphs; also that rising data patterns are more typical than falling and neutral data patterns. The findings contribute new knowledge about the hierarchical structure of graph schema and can inform design choices about which graph types might best facilitate viewers’ understanding of data visualizations.

Introduction

Data graphs facilitate analysis and communication of quantitative information. Perhaps the most common types of data graphs, line and bar graphs, were first published by William Playfair in 1786. Impressively, Playfair’s choice of graph design has endured. By 1913, line and bar graphs were being used to communicate with a broad public. For example, campaigns from the US Health Department publicizing falling death rates were displayed widely in street parades and on carriages (Brinton, 1914). Over time, the utility of line and bar graphs has been well supported by a body of experimental research (Spence, 2006) and integrated into the education curricula used throughout the world to develop children’s ability to read and construct data visualizations (Börner et al., 2019; Franconeri et al., 2021). Even in the modern era, in which data visualizations are presented in digital form (Friendly, 2008), line and bar graphs remain a prominent part of current guidelines for constructing effective data visualizations (Franconeri et al., 2021).
Although established conventions for data visualization suggest using bar graphs to display nominal category values and line graphs to display metric scale values, different types of graphs promote different kinds of interpretations (Zacks and Tversky, 1999). For example, bar graphs communicate contrast between conditions more effectively, for instance differences in a person’s mood across two times of day, even though the scale is metric (Franconeri et al., 2021). Thus, the choice of graph type should also consider how well the intended message will be comprehended by the viewer.

Graph schema

Several cognitive models of graph comprehension (e.g. Lohse, 1993; Padilla et al., 2018; Pinker, 1990) suggest that people have a graph schema stored in their long-term memory. The basic idea is that viewers’ comprehension of a given graph requires that the visually encoded stimulus is matched to a specific graph schema (Kosslyn, 1989). For example, vertical bar graphs, horizontal bar graphs and line graphs all belong to an ‘L-shaped’ graph schema characterized by horizontal and vertical axes defining a Cartesian coordinate system. In contrast, pie charts and doughnut graphs belong to an ‘O-shaped’ graph schema characterized by a circular space defined by polar coordinates (angle and distance from centre). Quick and efficient mapping of visual stimuli to the correct graph schema facilitates comprehension. Working from the assumption that differences in reaction time (RT) reflect differences in recognition and cognitive processing of graphs, Ratwani and Trafton (2008) found that individuals extracted information from graphs more quickly when a schema consistent with the current stimulus was activated through prior exposure (i.e. during blocks of non-switch trials, bar graph followed by bar graph) than when a schema inconsistent with the current stimulus was activated (i.e. during blocks of switch trials, pie chart followed by bar graph). The results suggest that individuals do indeed make use of specific graph schemas when interpreting a given data graph.
Pinker (1990) proposed that graph schemas are part of a hierarchical structure – similar to how the cognitive representations of other semantic categories are organized (Rosch, 1975; Rosch et al., 1976). As shown in Figure 1, general schemas are mapped to collections of more specific schemas or visual descriptions. For example, the general schema for bird is activated as the visual descriptions of particular types of birds (e.g. robin, ostrich) are perceived and encoded. Ideally, the general schema helps organize the visual information in ways that support comprehension. Similarly, a general schema that supports comprehension of information embedded in data graphs is activated when the visual descriptions of specific graphs (e.g. vertical bar graphs, horizontal bar graphs, line graphs) are perceived and encoded.
Figure 1. Applying typicality research to data graphs.
Note: The degree of typicality (presumed in the case of data graphs) is encoded by line width.

Graph typicality

Notably, the general schema may be activated more quickly by specific instances that are especially typical or common – a phenomenon known as the typicality effect (Rosch, 1975). For example, as indicated by the bold line on the left side of Figure 1, a robin may be perceived as a more typical case of the bird schema than an ostrich. If so, visual descriptions of robins will more quickly activate a general bird schema that would support comprehension of the stimuli (Rosch et al., 1976). By analogy, as indicated by the bold line on the right side of Figure 1, vertical bar graphs may activate the data graphs schema more quickly than horizontal bar graphs or line graphs. If so, the typicality effect would suggest that vertical bar graphs may be especially useful for communicating quantitative information.
Two kinds of models have been proposed to account for typicality effects. In prototype models (Rosch and Mervis, 1975), typicality is determined by the extent to which a stimulus has the specific features that define the prototype. In exemplar models (Storms et al., 2000), typicality is determined by the degree of similarity of a stimulus to the exemplars stored in memory. In both models, frequency of exposure to specific instances plays a critical role in the development of cognitive representations – either by supporting formulation of the feature set that defines the prototype or by expanding the quantity or quality of exemplars stored in memory. This suggests that, whether using prototype or exemplar models, graphs that are encountered more often should become more typical and, in turn, be interpreted more quickly. Indeed, results from two studies indicate that individuals can pick out specific values or the higher of two values more quickly when viewing vertical bar graphs than when viewing horizontal bar graphs (Fischer et al., 2005) or line graphs (Ratwani and Trafton, 2008). The implication is that vertical bar graphs may be seen more often and thus be deemed more typical than horizontal bar graphs or line graphs.
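For illustration only (this is not a model from the cited work), the difference between the two accounts can be sketched with hypothetical binary feature vectors, scoring typicality either as similarity to an averaged prototype or as mean similarity to all stored exemplars:

```python
import numpy as np

# Hypothetical binary feature codes for stored graph exemplars (rows = exemplars, columns = features)
exemplars = np.array([[1, 1, 0, 1],
                      [1, 1, 1, 0],
                      [0, 1, 0, 1]])
stimulus = np.array([1, 1, 0, 0])   # a newly encountered graph, encoded with the same features

# Prototype account: typicality = similarity to the averaged feature set defining the prototype
prototype = exemplars.mean(axis=0)
prototype_typicality = 1 - np.abs(stimulus - prototype).mean()

# Exemplar account: typicality = mean similarity to every exemplar stored in memory
exemplar_typicality = (1 - np.abs(stimulus - exemplars).mean(axis=1)).mean()

print(round(prototype_typicality, 2), round(exemplar_typicality, 2))
```

In both accounts, adding further exemplars of a frequently encountered graph type either shifts the prototype towards that type or enlarges the pool of similar exemplars, which is the frequency-of-exposure mechanism described above.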

The present study

In this study, we investigate if and how typicality effects manifest across a variety of data graphs. Specifically, we examine the typicality of three types of L-shaped graphs that are both ubiquitous (Friendly and Denis, 2005) and serve as key examples in graph comprehension models (Kosslyn, 1989; Pinker, 1990; Ratwani and Trafton, 2008): vertical bar graphs, horizontal bar graphs and line graphs; and three types of data patterns that are commonly depicted in these graphs: rising, neutral and falling patterns. Our results inform both scientific and practical knowledge about data graphs. First, they provide new knowledge about the structure of the graph schema, specifically whether graph schemas may be organized in a hierarchical structure (Pinker, 1990; Ratwani and Trafton, 2008). Second, the results inform decisions about which graph types might best facilitate viewers’ understanding, as well as the design of icons in human–machine interaction (where fast graph recognition is crucial). Third, in the long run, knowledge of typicality effects may help designers create more memorable visualizations, since unique (i.e. atypical) stimuli seem to be easier to remember (Borkin et al., 2013).
Drawing on prior work, we study typicality effects using two methods (Rosch et al., 1976). In Study 1, we followed the traditional approach where participants are asked to provide subjective ratings of the typicality of specific instances of the general category (e.g. pictures of different types of birds). Here, ratings indicate relative typicality of the different graph types and data patterns. In parallel, we obtained participants’ perceptions of the relative frequency of their prior exposure to the different graph types and data patterns. In Study 2, we used a verification time task approach wherein participants are asked to judge whether or not specific instances belong to the general category. Here, faster reaction times indicate greater typicality of specific graph variants. In line with prior work (Ratwani and Trafton, 2008), we expected that participants had encountered vertical bar graphs more frequently and that these graphs would be deemed, both subjectively and behaviourally, as the most typical of the data graphs studied here.

Study 1

Method

Participants

A priori calculations indicated that 28 participants would be needed to have a statistical power of .80 for detecting a medium effect size of f = .25 at α = .05 when using a within-subjects ANOVA with three factor levels (Faul et al., 2009). Daily checking of our online study platform facilitated collection of data from N = 57 German-speaking psychology students (37 women, 20 men, age M = 33.1 years, SD = 12.3) who participated in the study for course credit.

Material and procedure

The study was conducted online using Unipark (Questback GmbH, 2019). Each person was presented with 9 stimuli, formed by crossing 3 graph types (vertical bar graph, horizontal bar graph and line graph) with 3 data patterns (rising, neutral, falling). The nine data graphs (created using Excel) are depicted in Figure 2. Each data graph showed 6 data points. For the rising patterns, the first number was randomly chosen from the range 0–10 (resulting in 6.26), the second from 10–20 (16.65), the third from 20–30 (26.31), the fourth from 30–40 (39.88), the fifth from 40–50 (47.49) and the sixth from 50–60 (50.66). For the falling patterns, we used the numbers generated for the rising patterns in reversed order. For the neutral pattern, six numbers were randomly chosen from the range 25–35 (resulting in 33, 31.95, 32.94, 30.02, 31.80 and 33.76). We chose this range to avoid creating a rising or falling pattern. The axis showing the value of the data points was always labelled with tick-marks from 0 to 60 at 10-point intervals. The pixel size of each data graph was 379 × 379. First, all graphs were presented one by one in an individually randomized order and participants rated how typical the graph shown was of a data graph on a response scale from 1 (very untypical) to 6 (very typical). Second, participants used the mouse to sort the stimuli in descending order according to how often they estimated having seen a similar graph before. The most frequently encountered graph was to be placed at the top (first place was coded as 1) and the least frequently encountered graph was to be placed at the bottom (last place was coded as 9). All stimulus images and data are available at https://osf.io/7ZM2D
Figure 2. Data graph stimuli of the studies.
Note: Each combination of the three graph types and the three data patterns is shown.
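For illustration, the pattern values and the nine graph variants in Figure 2 could be generated as in the following sketch. The original stimuli were created in Excel; the Python/matplotlib code below, including the random seed, is therefore only an approximation of that procedure, not the script actually used.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)  # illustrative seed; it does not reproduce the reported values

# Rising pattern: one value from each successive 10-point bin (0-10, 10-20, ..., 50-60)
rising = np.array([rng.uniform(lo, lo + 10) for lo in range(0, 60, 10)])
falling = rising[::-1]                 # falling pattern reuses the rising values in reversed order
neutral = rng.uniform(25, 35, size=6)  # six values from 25-35 to avoid a clear trend

patterns = {'rising': rising, 'neutral': neutral, 'falling': falling}
x = np.arange(1, 7)

fig, axes = plt.subplots(3, 3, figsize=(9, 9))
for row, (name, y) in enumerate(patterns.items()):
    axes[row, 0].bar(x, y)               # vertical bar graph
    axes[row, 1].barh(x, y)              # horizontal bar graph
    axes[row, 2].plot(x, y, marker='o')  # line graph
    for col, ax in enumerate(axes[row]):
        # the value axis is always ticked from 0 to 60 at 10-point intervals, as in the stimuli
        if col == 1:
            ax.set_xlim(0, 60); ax.set_xticks(range(0, 61, 10))
        else:
            ax.set_ylim(0, 60); ax.set_yticks(range(0, 61, 10))
plt.tight_layout()
plt.savefig('stimuli_overview.png')  # hypothetical output file
```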

Results

Figures 3 and 4 show the mean values for the typicality ratings and the frequency rankings.
Figures 3 and 4. Mean values for typicality ratings (left) and frequency rankings (right).
Note: For ratings of typicality (left) higher numerical values indicate higher typicality. For rankings of frequency (right) higher numerical values indicate a lower frequency.

Typicality ratings

A 3 × 3 ANOVA with the within-subject factors graph type (vertical bar graph, horizontal bar graph, line graph) and data pattern (rising, neutral, falling) showed a significant main effect for graph type, F(2, 112) = 20.92, p < .001, ηp2 = .27, and a significant main effect for data pattern, F(1.80, 100.66) = 9.18, p = .011, ηp2 = .14 (Greenhouse-Geisser corrected). No interaction effect was found, F(4, 224) = 0.81, p = .518, ηp2 = .01.
In regard to the graph type, higher typicality ratings were observed for vertical bar graphs (M = 4.68, SD = 1.15) and line graphs (M = 4.63, SD = 1.20) than for horizontal bar graphs (M = 3.81, SD = 1.41). The difference between vertical bar graphs and horizontal bar graphs was significant (t(56) = 5.26, p < .001), as well as the difference between line graphs and horizontal bar graphs (t(56) = 5.14, p < .001). The difference between vertical bar graphs and line graphs was not significant (t(56) = 0.42, p = .676).
In regard to the data pattern, higher typicality ratings were observed for the rising pattern graphs (M = 4.66, SD = 1.08), followed by the falling pattern (M = 4.41, SD = 1.26), followed by the neutral pattern (M = 4.06, SD = 1.27). The difference between rising patterns and falling patterns was significant (t(56) = 2.19, p = .033), as well as the difference between rising patterns and neutral patterns (t(56) = 3.96, p < .001), and the difference between falling patterns and neutral patterns (t(56) = 2.29, p = .026).
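As a reference for readers, a minimal sketch of this type of analysis in Python is given below. It assumes the ratings are stored in long format with hypothetical column names and level labels (participant, graph_type, pattern, rating) and uses the pingouin package; the article does not specify which analysis software was used, and the Greenhouse-Geisser correction reported above would be applied where sphericity is violated.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x graph_type x pattern
df = pd.read_csv('typicality_ratings.csv')

# 3 x 3 repeated-measures ANOVA with within-subject factors graph type and data pattern
aov = pg.rm_anova(data=df, dv='rating', within=['graph_type', 'pattern'],
                  subject='participant', detailed=True)
print(aov)

# Follow-up paired t-tests between graph types, averaging ratings over the data patterns
means = df.groupby(['participant', 'graph_type'])['rating'].mean().unstack()
print(pg.ttest(means['vertical_bar'], means['horizontal_bar'], paired=True))
print(pg.ttest(means['line'], means['horizontal_bar'], paired=True))
print(pg.ttest(means['vertical_bar'], means['line'], paired=True))
```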

Frequency ranking

A 3 × 3 ANOVA with the within-subject factors graph type (vertical bar graph, horizontal bar graph, line graph) and data pattern (rising, neutral, falling) showed a significant main effect for graph type, F(1.63, 91.20) = 29.93, p < .001, ηp2 = .35 (Greenhouse-Geisser corrected), a significant main effect for data pattern, F(1.61, 89.99) = 11.101, p < .001, ηp2 = .16 (Greenhouse-Geisser corrected) and an interaction effect, F(4, 224) = 8.16, p < .001, ηp2 = .13.
In regard to the graph type, the mean frequency rank for vertical bar graphs was 3.54 (SD = 1.17), for horizontal bar graphs 6.06 (SD = 1.66), and for line graphs 5.14 (SD = 1.83). The difference between vertical bar graphs and horizontal bar graphs was significant (t(56) = −9.88, p < .001), as well as the difference between vertical bar graphs and line graphs (t(56) = −4.90, p < .001), and the difference between horizontal bar graphs and line graphs (t(56) = 2.35, p = .022).
In regard to the data pattern, the mean frequency rank was 4.13 (SD = 1.27) for the rising pattern, 5.53 (SD = 1.79) for the neutral pattern, and 5.08 (SD = 1.35) for the falling pattern. The difference between rising patterns and neutral patterns was significant (t(56) = −4.15, p < .001), as well as the difference between rising patterns and falling patterns (t(56) = −4.36, p < .001). The difference between neutral patterns and falling patterns was not significant (t(56) = 1.33, p = .189).
The interaction effect reflects that, while line graphs were ranked as more frequently encountered than horizontal bar graphs for the rising and neutral data patterns, line graphs were ranked as less frequently encountered than horizontal bar graphs for the falling data pattern. Notably, vertical bar graphs with a rising data pattern were ranked as the most frequently encountered.

Follow-up analysis

For each person, we calculated the correlation between all their typicality ratings and all their frequency rankings. The mean of the correlations was –.58. Fisher r-to-Z transformation and a one-sample t-test confirmed that the average correlation was significantly different from zero (t(50) = -9.06, p < .001). Hence, graphs ranked as more frequent were also rated as more typical.
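A minimal sketch of this follow-up analysis, assuming a hypothetical long-format file with one row per participant and stimulus (columns participant, typicality, freq_rank):

```python
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv('study1_long.csv')  # hypothetical file and column names

def person_corr(g):
    # Pearson correlation between one person's nine typicality ratings and nine frequency ranks
    return np.corrcoef(g['typicality'], g['freq_rank'])[0, 1]

r = df.groupby('participant').apply(person_corr).dropna()  # undefined correlations are dropped
z = np.arctanh(r)                  # Fisher r-to-Z transformation
t, p = stats.ttest_1samp(z, 0.0)   # does the average correlation differ from zero?
print(f'mean r = {r.mean():.2f}, t({len(z) - 1}) = {t:.2f}, p = {p:.3f}')
```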

Discussion

Although the common (L-shaped) data graph formats studied here can be used interchangeably to display many kinds of data, Study 1 documented substantial differences in rated typicality and corresponding differences in frequency rankings. Vertical bar graphs seem to dominate the representation. The correlation showed that typicality ratings were related to perceived frequency of exposure. Thus, the more often a specific kind of data graph has been observed in the past, the more typical it is perceived to be. Overall, perceived typicality was highest for vertical bar graphs, followed by line graphs, followed by horizontal bar graphs. This order is largely in line with Ratwani and Trafton’s (2008) finding based on participants’ reaction times for reading off a value, where vertical bar graphs were read fastest, followed by horizontal bar graphs and then line graphs.
Typicality was also influenced by the data pattern depicted in the graph, albeit with a smaller effect size than that of graph type. Rising patterns were perceived as more typical than falling patterns, and neutral patterns seem to be less typical than rising or falling patterns. The finding of the high perceived frequency of rising trends is in line with previous findings from related fields. For example, research on function learning (Brehmer, 1971, 1974; Busemeyer et al., 1997; Kalish et al., 2004, 2007; McDaniel and Busemeyer, 2005) and graph perception (Ciccione et al., 2022) has shown that people have a bias for linear functions with a positive slope, that increasing linear functions are among the most easily learned functions, and that people extrapolate more easily from linear rising trends than from exponential ones.
In a second study, we used a verification time approach to check whether the conclusions obtained from self-report measures used in Study 1 might also manifest in behavioural measures.

Study 2

Method

Participants

In Study 2, 30 German-speaking participants (17 women, 13 men, age M = 35.9 years, SD = 14.5) took part as volunteers without additional compensation (see Study 1 for the power analysis).

Materials and procedure

Participants were tested individually in a laboratory on a laptop computer with a 12.5-inch screen. The experiment was controlled by PsychoPy (Peirce et al., 2019). We used the same design and stimuli as in Study 1, except for a few changes. The nine data graphs were supplemented with a set of ‘shredded’ stimuli that might not be considered data graphs. These stimuli were created by taking each graph in Figure 2 and manipulating the image using image effects in the freeware IrfanView. For vertical bar graphs and line graphs, a vertical shift effect with a strength of 50 was applied. For the horizontal bar graphs, a horizontal shift effect with a strength of 50 was used. Figure 5 shows the nine ‘shredded’ stimuli. During the study, each of the 9 data graphs was presented 5 times so that RT could be averaged across multiple presentations of the same stimulus. Specifically, the stimuli were presented in an individually randomized order of 90 trials (9 data graph stimuli × 5 presentations and 9 shredded stimuli × 5 presentations). On each trial, participants were asked to indicate as quickly as possible by pressing a key whether the presented stimulus was a data graph (right arrow key) or a shredded variant of a graph (left arrow key).
Figure 5. Shredded stimuli used in the study.
Note: The origin of each stimulus is based in the same cell of Figure 2. For example, the stimulus in the upper left corner is a shredded vertical bar graph with a rising data pattern.
A fixation cross appeared in the middle of the screen for 1 second before each stimulus. At the beginning of the experiment, participants completed practice trials with six example stimuli: three were data graphs (each data graph type with a neutral pattern) and three were shredded stimuli. The dependent variable, reaction time for correct recognition, was measured on each trial.
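A schematic sketch of this trial procedure in PsychoPy is shown below. The window settings, stimulus file names and data handling are illustrative assumptions, not taken from the original experiment script.

```python
import random
from psychopy import visual, core, event

win = visual.Window(fullscr=True, color='white', units='pix')
fixation = visual.TextStim(win, text='+', color='black')

# 90 trials: 9 data graphs and 9 shredded variants, each presented 5 times (file names hypothetical)
files = [f'graph_{i}.png' for i in range(9)] + [f'shredded_{i}.png' for i in range(9)]
trials = files * 5
random.shuffle(trials)

clock = core.Clock()
results = []
for fname in trials:
    fixation.draw(); win.flip(); core.wait(1.0)       # 1 s fixation cross before each stimulus
    stim = visual.ImageStim(win, image=fname, size=(379, 379))
    stim.draw(); win.flip(); clock.reset()
    # right arrow = data graph, left arrow = shredded variant
    key, rt = event.waitKeys(keyList=['right', 'left'], timeStamped=clock)[0]
    is_graph = fname.startswith('graph')
    results.append({'file': fname, 'rt_ms': rt * 1000,
                    'correct': (key == 'right') == is_graph})

win.close()
core.quit()
```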

Results

Two criteria led to the exclusion of small amounts of reaction time data: wrong answers (0.011% of the data) and reaction times larger than 3000 ms (0.001% of the data). For each of the nine data graphs, the mean value across the five (or fewer, in case of excluded data) reaction times was calculated.
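A minimal sketch of this preprocessing step, assuming the trial-level results were saved with hypothetical columns participant, graph, rt_ms and correct:

```python
import pandas as pd

trials = pd.read_csv('study2_trials.csv')  # hypothetical trial-level file

# Exclude wrong answers and reaction times above 3000 ms
valid = trials[trials['correct'] & (trials['rt_ms'] <= 3000)]

# Average the remaining (up to five) reaction times per participant and data graph
mean_rt = valid.groupby(['participant', 'graph'])['rt_ms'].mean().reset_index()
print(mean_rt.head())
```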
Figure 6 shows the mean reaction times for each data graph.
Figure 6. Mean reaction times for each data graph.
The 3 × 3 ANOVA showed a significant main effect for graph type, F(2, 58) = 4.92, p = .011, ηp2 = .15, and a significant main effect for data pattern, F(2, 58) = 3.61, p = .033, ηp2 = .11. No interaction effect was found, F(3.2, 92.7) = 1.43, p = .237, ηp2 = .05 (Greenhouse-Geisser corrected).
In regard to the graph type, the shortest reaction time was observed for vertical bar graphs (M = 720 ms, SD = 224 ms), followed by horizontal bar graphs (M = 757 ms, SD = 250 ms), followed by line graphs (M = 773 ms, SD = 255 ms). The difference between vertical bar graphs and horizontal bar graphs was significant (t(29) = −2.50, p = .020), as well as the difference between vertical bar graphs and line graphs (t(29) = −3.15, p = .004). The difference between horizontal bar graphs and line graphs was not significant (t(29) = −0.82, p = .418).
In regard to the data pattern, the fastest reaction time was observed for the rising pattern graphs (M = 729 ms, SD = 239 ms), followed by the neutral pattern (M = 746 ms, SD = 227 ms), followed by the falling pattern (M = 774 ms, SD = 261 ms). The difference between rising patterns and falling patterns was significant (t(29) = −2.42, p = .022). The difference between rising patterns and neutral patterns was not significant (t(29) = −1.00, p = .328), neither was the difference between falling patterns and neutral patterns (t(29) = 1.90, p = .067).
The mean RT for shredded stimuli was 778 ms (SD = 463 ms) and for data graphs was 776 ms (SD = 500 ms). The difference was not significant (t(1346) = −0.11, p = .910).

Discussion

The results from Study 2 showed that graph type had a significant influence on verification time. In Study 1, vertical bar graphs were rated as most typical and ranked as most frequently encountered. In Study 2, vertical bar graphs were recognized significantly faster than horizontal bar graphs and line graphs, with the same ordering of reaction times seen in a previous study by Ratwani and Trafton (2008). In that study, participants had to identify data values in the graphs, so it was ambiguous whether the RT differences reflected differences in recognition of the graph type (cf. Pinker, 1990) or aspects of the data extraction process (or both). Our finding, based on a much simpler and more process-pure task in which participants only had to decide whether or not the stimulus was a data graph, thus provides an important replication and clarification. We add the new finding that reaction times for recognition were also influenced by the data pattern, with rising pattern graphs being recognized faster than neutral and falling pattern graphs.

General Discussion

This study investigated the possibility of typicality effects in data graphs using both a traditional typicality ratings approach and a verification task approach. Overall, the results indicate that vertical bar graphs are a more typical graph type than horizontal bar graphs and line graphs. Following from the theoretical and empirical work of Rosch (Rosch, 1975; Rosch et al., 1976) and Pinker (1990), our results suggest that a vertical bar graph is a more typical L-shaped graph than are horizontal bar graphs and line graphs.
We also found that the data pattern seems to play a role in the typicality of graphs. Our results suggest that rising data patterns are more typical than falling patterns, while neutral patterns are perceived as less typical. Pinker (1990) explicitly discussed different patterns of descending and ascending staircases in relation to a bar graph schema, and Rosch et al. (1976) showed typicality effects with dot patterns. Our results provide additional evidence that data patterns are part of an individual’s graph schema. One explanation for the lower typicality of the neutral patterns in our stimuli could be that they occur infrequently, in part because designers often adjust the range of the vertical axis so as to highlight the differences among the data points – in line with the design advice to put the data in focus (e.g. Tufte, 2001).
One practical implication of the study concerns the choice of graph type. If fast processing and understanding of a graph are crucial, it is perhaps better to use a vertical bar graph. However, in some cases there are other reasons to choose a different graph type. For example, it is often recommended to use horizontal bar graphs instead of vertical bar graphs when the labels are long (e.g. Wickham and Grolemund, 2016). Other criteria can be the class of data and the intended message (focus on contrast vs change over time, cf. Franconeri et al., 2021). A further practical implication of our findings concerns the data pattern. In cases where the order of the values is not determined by other criteria, it may be useful to sort the data to create a rising pattern, as in the sketch below.
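A minimal sketch of this recommendation, using hypothetical category data and matplotlib:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical categories whose order carries no inherent meaning
data = pd.Series({'D': 41, 'A': 12, 'C': 28, 'B': 19, 'E': 55})

# Sorting ascending produces the rising pattern that was rated most typical in the present studies
data.sort_values().plot(kind='bar')
plt.ylabel('Value')
plt.tight_layout()
plt.show()
```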
Some limitations of our study suggest perspectives for future work. First, the Study 1 sample consisted only of university psychology students. It is possible that they differ from the general public in their experience with graph types or in other potentially influential factors such as media consumption. Second, our stimuli consisted of a well-controlled but small set of data graphs. This ensured that participants would not suffer fatigue and that practice effects would be minimal. Long sessions in Study 2 might have led participants to forget that they were dealing with data graphs and to focus instead on whichever discriminating features turned out, with practice, to best support categorizing the stimuli (cf. Gaschler et al., 2015). Future studies can extend the work by allocating different stimuli to short sessions on different days in a counterbalanced order. This would allow for a broader sampling of stimulus properties, including the strength of rising or falling patterns, the number of data points depicted, the orientation of axes and labels, the colour of various visual features, the inclusion of error bars, and many other graph types, in order to obtain greater specificity about the many factors that influence typicality. We look forward to investigating typicality effects in data graphs and how they can be leveraged to increase and optimize viewers’ understanding and decision-making.

Funding

The authors received no financial support for the research, authorship and publication of this article, and there is no conflict of interest.

References

Borkin MA, et al. (2013) What makes a visualization memorable? IEEE Transactions on Visualization and Computer Graphics 19(12): 2306–2315.
Börner K, Bueckle A, Ginda M (2019) Data visualization literacy: Definitions, conceptual frameworks, exercises, and assessments. Proceedings of the National Academy of Sciences 116(6): 1857–1864.
Brehmer B (1971) Subjects’ ability to use functional rules. Psychonomic Science 24(6): 259–260.
Brehmer B (1974) Hypotheses about relations between scaled variables in the learning of probabilistic inference tasks. Organizational Behavior and Human Performance 11(1): 1–27.
Brinton WC (1914) Graphic Methods for Presenting Facts. New York, NY: The Engineering Magazine Company.
Busemeyer JR, et al. (1997) Learning functional relations based on experience with input–output pairs by humans and artificial neural networks. In: Lamberts K, Shanks DR (eds) Knowledge, Concepts, and Categories: Studies in Cognition. Cambridge, MA: MIT Press, 408–437.
Ciccione L, Sablé-Meyer M, Dehaene S (2022) Analyzing the misperception of exponential growth in graphs. Cognition 225: 105112.
Faul F, et al. (2009) Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods 41(4): 1149–1160.
Fischer MH, Dewulf N, Hill RL (2005) Designing bar graphs: Orientation matters. Applied Cognitive Psychology 19(7): 953–962.
Franconeri SL, et al. (2021) The science of visual data communication: What works. Psychological Science in the Public Interest 22(3): 110–161.
Friendly M (2008) The golden age of statistical graphics. Statistical Science 23(4).
Friendly M, Denis D (2005) The early origins and development of the scatterplot. Journal of the History of the Behavioral Sciences 41(2): 103–130.
Gaschler R, Marewski JN, Frensch PA (2015) Once and for all: How people change strategy to ignore irrelevant information in visual tasks. Quarterly Journal of Experimental Psychology 68(3): 543–567.
Kalish ML, Griffiths TL, Lewandowsky S (2007) Iterated learning: Intergenerational knowledge transmission reveals inductive biases. Psychonomic Bulletin & Review 14(2): 288–294.
Kalish ML, Lewandowsky S, Kruschke JK (2004) Population of linear experts: Knowledge partitioning and function learning. Psychological Review 111(4): 1072–1099.
Kosslyn SM (1989) Understanding charts and graphs. Applied Cognitive Psychology 3(3): 185–225.
Lohse GL (1993) A cognitive model for understanding graphical perception. Human–Computer Interaction 8(4): 353–388.
McDaniel MA, Busemeyer JR (2005) The conceptual basis of function learning and extrapolation: Comparison of rule-based and associative-based models. Psychonomic Bulletin & Review 12(1): 24–42.
Padilla LM, et al. (2018) Decision making with visualizations: A cognitive framework across disciplines. Cognitive Research: Principles and Implications 3: 1–25.
Peirce J, et al. (2019) PsychoPy2: Experiments in behavior made easy. Behavior Research Methods 51(1): 195–203.
Pinker S (1990) A theory of graph comprehension. In: Freedle R (ed.) Artificial Intelligence and the Future of Testing. Mahwah, NJ: Lawrence Erlbaum Associates, 73–126.
Playfair W (1786) Commercial and Political Atlas: Representing, by Copper-Plate Charts, the Progress of the Commerce, Revenues, Expenditure, and Debts of England, during the Whole of the Eighteenth Century.
Questback GmbH (2019) EFS Survey, Version EFS Winter 2018. Cologne: Questback GmbH.
Ratwani RM, Trafton JG (2008) Shedding light on the graph schema: Perceptual features versus invariant structure. Psychonomic Bulletin & Review 15(4): 757–762.
Rosch E (1975) Cognitive representations of semantic categories. Journal of Experimental Psychology: General 104(3): 192–233.
Rosch E, Mervis CB (1975) Family resemblances: Studies in the internal structure of categories. Cognitive Psychology 7(4): 573–605.
Rosch E, Simpson C, Miller RS (1976) Structural bases of typicality effects. Journal of Experimental Psychology: Human Perception and Performance 2(4): 491–502.
Spence I (2006) William Playfair and the psychology of graphs. Proceedings of the American Statistical Association, Section on Statistical Graphics, 2426–2436.
Storms G, De Boeck P, Ruts W (2000) Prototype and exemplar-based information in natural language categories. Journal of Memory and Language 42(1): 51–73.
Tufte ER (2001) The Visual Display of Quantitative Information. Connecticut: Graphics Press USA.
Wickham H, Grolemund G (2016) R for Data Science: Import, Tidy, Transform, Visualize, and Model Data. Cambridge, MA: O’Reilly Media.
Zacks J, Tversky B (1999) Bars and lines: A study of graphic communication. Memory & Cognition 27(6): 1073–1079.

Biographies

DANIEL REIMANN is a Lecturer and Researcher in the Department of Psychology at the FernUniversität in Hagen, Germany. His research interests include learning, memory and perception of data graphs.
Address: Department of Psychology, FernUniversität in Hagen, Universitätsstraße 33, Hagen 58084, Germany.
MARIE STRUWE is a Post-Graduate in Psychology. Her research interests include perception of data graphs.
Address: as Daniel Reimann.
NILAM RAM is a Professor of Psychology and Communication at Stanford University, USA. His research interests include learning, information processing and the dynamics of life-span developmental processes.
Address: Departments of Communication and Psychology, Stanford University, CA 94305-2050, USA.
ROBERT GASCHLER is a Professor of Psychology in the Department of Psychology at the FernUniversität in Hagen, Germany. His research interests include sequence learning, dual-tasking, didactics of psychology and perception of data graphs.
Address: as Daniel Reimann.

Published In

Article first published online: November 14, 2022
Issue published: February 2025

Keywords

bar graphs, data graphs, line graphs, typicality

Rights and permissions

© The Author(s) 2022.
Creative Commons License (CC BY 4.0)
This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).
