People can understand IPCC visuals and are not influenced by colors

We carry out two online experiments with large representative samples of the US population to study key climate visuals included in the Sixth Report of the Intergovernmental Panel on Climate Change (IPCC). In the first study (N = 977), we test whether people can understand such visuals, and we investigate whether color consistency within and across visuals influences respondents' understanding, their attitudes toward climate change and their policy preferences. Our findings reveal that respondents exhibit a remarkably good understanding of the IPCC visuals. Given that IPCC visuals convey complex multi-layered information, our results suggest that the clarity of the visuals is extremely high. Moreover, we observe that altering color consistency has limited impact on the full sample of respondents, but affects the understanding and the policy preferences of respondents who identify as Republicans. In the second study (N = 1169), we analyze the role played by colors' semantic discriminability, that is, the degree to which observers can infer a unique mapping between a color and a concept (for instance, red and warmth have high semantic discriminability). We observe that semantic discriminability does not affect attitudes toward climate change or policy preferences and that increasing semantic discriminability does not improve understanding of the climate visual.


Introduction
Between 2021 and 2022 the Intergovernmental Panel on Climate Change (IPCC) released its sixth report (Tollefson et al 2021). In light of the urgency of the threat posed by global warming, it is imperative that the message of the report reaches a wide audience, as without broad support it will be hard to pass comprehensive and effective reforms addressing the climate crisis (Bernauer and McGrath 2016, Ehret 2021). Climate visuals can help ensure that the IPCC message reaches the general public (Harold et al 2016). First, visuals are an effective and efficient means to convey scientific information (Okan et al 2012, Fischer et al 2018), as they harness the human visual system's capacity to be a powerful pattern detector (Franconeri et al 2021, Morelli et al 2021). Second, many news media outlets included visuals from the summary for policymakers (SPM) of the IPCC report in their articles (table 1). In fact, 'Visualizations have been the key element in the communication strategy of IPCC' (Xexakis and Trutnevyte 2021).
Nevertheless, finding the right way to leverage the potential of data visualization is complex. A burgeoning literature is attempting to identify effective ways to convey climate-relevant information using visuals (Daron et al 2015, 2021, O'Neill 2017, Christel et al 2018, Xexakis and Trutnevyte 2021, Calvo et al 2022), and some studies have focused specifically on visuals included in the IPCC reports (Taylor et al 2015, Harold et al 2020). Thus far, empirical evidence has suggested that people have a limited understanding of IPCC visuals (McMahon et al 2015, Fischer et al 2018). However, these studies relied on small samples and focused on older versions of the IPCC report.
Against this background, we carry out two between-subjects experiments with representative samples of the US population. Our first question is:

RQ1 Do people understand the visuals included in the SPM of the IPCC report?

To address this question, we test respondents' understanding of two key IPCC visuals: SPM.3 and SPM.5(c).

Table 1. A list of some of the news media that included in their articles the visuals from the IPCC report we use in study I (SPM.3 and SPM.5) and study II (SPM.4).
We then turn to study the role of colors because IPCC experts have flagged colors as one of the key factors to guide the user in the experience of processing information (Morelli et al 2021). To give an idea of how central the role of color is, the IPCC Visual Style Guide for Authors uses the word color 128 times in 28 pages. With respect to climate visualization, the research on colors has largely focused on how to convey uncertainty (Grigoryan and Rheingans 2004, Viard et al 2011, Retchless and Brewer 2016) and how to identify the best color scale in quantitative mapping (Brewer et al 1997, Harrower and Brewer 2003, Dasgupta et al 2018). We focus on different aspects and investigate the role played by consistency in color coding and by semantic discriminability.
The IPCC aims for 'consistent color coding' within and across reports (IPCC WGI Technical Support Unit 2018). Consistency, however, can take many forms. One way to apply consistent color coding is to always use the same color to describe an environmental event. Thus, for instance, one could always associate green with increases in precipitation. Another way is to always use the same color to describe events with a given connotation. Thus, for instance, one could always associate green with positive events and red with negative events. Figure SPM.3 prioritizes the first form of consistency (figure 1, left panel).
SPM.3 is composed of three panels. The top panel describes observed changes in hot extremes, with increases marked in red and decreases marked in blue. The middle panel describes observed changes in heavy precipitation, with increases marked in green and decreases marked in yellow. Last, the bottom panel describes observed changes in agricultural and ecological droughts, with increases marked in yellow and decreases marked in green. This use of colors also aims to maximize consistency across visuals, as other visuals in the report use similar colors in association with these climate events. However, this use of colors creates an inconsistent association between colors and events' connotation. In the middle panel green denotes negative events, whereas in the bottom panel the same green denotes positive events.
In our treatment, we focus on the association between colors and events' connotation and mark in red all increases in extreme weather events because they all share a negative connotation, whereas we mark in green all decreases because they all share a positive connotation (figure 1, right panel).
As this example shows, it is not always possible to simultaneously ensure consistency along all dimensions. Moreover, there might be trade-offs between preserving consistency within a visual and across visuals. From this perspective, testing understanding for SPM.3 and SPM.5(c) presents an important advantage. Even if our treatment improved within-visual consistency in terms of events' connotation, it reduced consistency across the visuals of the report. This is because the report has visuals like SPM.5(c) that use a color coding more in line with the one adopted in SPM.3 than with the one adopted in our treatment.
Therefore, our second question is:

RQ2a Which definition of 'consistent color coding' leads to a better understanding of climate visuals? (study I)

RQ2b Does reducing across-visual consistency negatively impact visuals' understanding? (study I)

Moreover, scholars have hypothesized that using colors with low semantic discriminability hinders the understanding of visuals (Terrado et al 2022). Semantic discriminability is 'the degree to which observers can infer a unique mapping between visual features and concepts, based on the visual features and concepts alone' (Schloss et al 2020). To put it differently, some colors might more naturally evoke certain concepts associated with climate change. For instance, people might naturally associate red with high temperatures and extreme risk, whereas blue might be associated with low temperatures (Schneider and Nocke 2018).
In the second experiment, we test whether respondents' understanding of a visual is affected by the semantic discriminability of the colors used. The Financial Times (FT) provided us with an opportunity to study the role of semantic discriminability in a setting with real-world implications. The main panel of SPM.4 of the SPM describes five possible scenarios in terms of future CO2 emissions (figure 2, right panel). In this visual the curve describing the worst-case scenario is in dark red, whereas the curve describing the best-case scenario is in light blue. In one of its articles, the FT included a figure that is almost identical, but has curves of different colors (figure 2, left panel). For instance, the curve describing the worst-case scenario is in light blue, whereas the curve describing the best-case scenario is pink. The colors used by the FT have a lower semantic discriminability, thus we investigate the following question:

RQ3 Does using colors with high semantic discriminability improve understanding? (study II)

Colors are not only important because they can aid or hinder understanding, but also because they can evoke emotions (Valdez and Mehrabian 1994, Kaya and Epps 2004). For instance, red is often associated with concepts like danger and fear (Pravossoudovitch et al 2014, Jonauskaite et al 2019), whereas yellow is often connected with joy (Jonauskaite et al 2019). As colors affect emotions, they might shape the reaction to climate visuals, and in particular the level of concern for the climate crisis. Thus, the fourth question that we investigate is:

RQ4 Do the colors used in a climate visual have an effect on respondents' level of concern for the climate crisis? (studies I and II)

Last, previous research has shown that some features of visuals can influence policy preferences (Romano et al 2020). While colors should not be used to manipulate people's preferences, ensuring that the IPCC conveys information in a 'policy-relevant but not policy-prescriptive' manner (Waisman et al 2019) requires understanding the role played by colors. Thus, our fifth question is:

RQ5 Do the colors used in a climate visual affect respondents' policy preferences? (studies I and II)

Materials and methods
To answer our research questions, we carried out two large-scale experiments with representative samples of the US population. There are at least two reasons to study whether the general public can understand IPCC visuals. First, key actors related to the IPCC have explicitly stated that its reports are also aimed at the general public (Lynn and Peeva 2021). Second, many news media outlets relied on IPCC visuals (table 1). Thus, visuals play a key role in ensuring that the IPCC message reaches a wide audience.
Moreover, to obtain a precise measure of semantic discriminability we carried out two pre-surveys, which are described in detail in the supplementary materials. The complete protocol of all surveys is available in the supplementary materials.

Study I

Visual and experimental manipulation
Our experimental manipulation in study I relates to the main panel of SPM.3 of the first part of the IPCC report. The respondents who were randomly assigned to the control group saw SPM.3 as it appeared in the IPCC report (figure 1, left panel). Respondents who were randomly assigned to the treatment group saw the same visual, but in this case all the increases in extreme events were marked in red and the decreases in green (figure 1, right panel). Figure 3 summarizes the flow of study I.

Sample, survey design and procedure
We sought to recruit a sample of N = 1000 US residents on Prolific. Prolific provides researchers with the option to recruit a sample stratified across three demographics: age, sex and ethnicity. While our sample was representative across these dimensions, we note that Democrats were over-represented. This is a standard feature of samples recruited online (Arechar and Rand 2021). Respondents were paid $1.10 (hourly compensation $6.60).
At the beginning of the experiment, participants saw a text containing information about the IPCC and the IPCC report. They were also informed that the visuals they would see during the experiment were based on the IPCC report. After the introduction, participants were randomly assigned to either the control group or the treatment group.
All participants were asked four sets of questions: (i) policy preferences (table 2, top two rows); (ii) concerns for climate change (table 2, bottom three rows); (iii) understanding of SPM.3 (table 3); (iv) understanding of SPM.5(c) (figure 4 and table 4). Respondents answered the questions in sets i, ii and iii while seeing the version of SPM.3 to which they were assigned. They answered the questions in set iv while seeing SPM.5(c). Before the understanding questions, participants were asked to complete a simple attention check.
Understanding questions force respondents to think about the visual differently from how they normally would when seeing it on a website. Therefore, they were included at the end, to avoid anchoring the responses provided to the first two sets of questions.
The literature on graph comprehension identifies three levels of graph understanding (Friel et al 2001, Galesic and Garcia-Retamero 2011). The first level relates to the ability to read the data represented in the graph, e.g. by finding specific information. The second level relates to the ability to identify relationships in the data, and the third level to the ability to make predictions that go beyond the data.

Afterward, respondents answered questions that we use as controls. The control questions can be grouped into: (i) graph literacy, (ii) climate literacy, (iii) color-related controls, (iv) standard demographic questions. Both the visual used to test graph literacy and the graph literacy questions were taken from Galesic and Garcia-Retamero (2011), whereas the climate literacy questions have been adapted from Leiserowitz et al (2011).

Table 5. Questions used to assess participants' understanding of SPM.4. These questions were used only in study II.

Question | Range | Understanding level
Estimating when in the 'very high' scenario GtCO2 yr−1 will reach 100 (SPM.4Q1) | Five possible answers, of which one correct | Level 1
Estimating the distance at various points in time between curves representing various scenarios (SPM.4Q2) | Respondents must rank the possible alternatives | Level 2
Formulating predictions based on the scenarios (SPM.4Q3) | Four possible answers, of which one correct | Level 3

Study II

Visual and experimental manipulation
Our experimental manipulation in study II relates to the main panel of SPM.4. SPM.4 was included by the FT in one of its articles, but the FT used different colors from those used by the IPCC. The participants who were randomly assigned to the control group saw SPM.4 as it appeared in the FT (figure 2, left panel).
Participants who were randomly assigned to the treatment group saw the same visual, but with the colors used by the IPCC in SPM.4 (figure 2, right panel). Figure 5 summarizes the flow of study II.

Sample, survey design and procedure
We sought to recruit a representative sample of N = 1000 US residents on CloudResearch. Unlike Prolific, CloudResearch does not automatically provide a representative sample. Thus, to ensure that our sample was stratified across the same demographics, we launched the experiment several times, creating restrictions by age, gender and race matching those we obtained in study I on Prolific. As soon as the target quotas were reached, the experiment was closed for that category of respondents. Respondents were paid $0.90 (hourly compensation $6.75). At the beginning of the experiment, all respondents saw the same message shown in study I. After the introduction, participants were randomly assigned to either the control group or the treatment group. All participants were then asked three sets of questions: (i) policy preferences; (ii) concerns for climate change; (iii) understanding of SPM.4. The first two groups of questions were the same as in study I (table 2). The understanding questions are reported in table 5. Respondents answered all these questions while seeing the figure to which they were assigned. After answering the understanding questions, respondents answered the same control questions as in study I. Respondents were also asked to complete the same attention check as in study I.

Results
Tables 6 and 7 report respectively the summary statistics for study I and study II.

Understanding of the IPCC visuals (study I)
First, we analyze the level of understanding of the original SPM.3 and SPM.5(c). The left panel of figure 6 indicates the percentage of correct answers provided by respondents to the understanding questions related to SPM.3 and SPM.5(c), disaggregated by level of understanding. We observe that, for the levels of understanding tested, the percentage of correct answers is close to 70% for both visuals. The right panel of figure 6 shows that reducing consistency between SPM.3 and SPM.5(c) did not worsen the understanding of SPM.5(c), the second visual seen by respondents.

Consistency of colors in climate visuals (study I)
We then compare the understanding of the control group, who saw SPM.3 with the original colors (figure 1, left panel), and of the treatment group, who saw our version of SPM.3 (figure 1, right panel). We observe no significant differences between the treatment and the control group. The full regression tables are available in the supplementary materials.

Semantic discriminability (study II)
We now turn to our third research question: whether using colors that have a higher semantic discriminability improves understanding of the climate visual, as hypothesized by the literature (Terrado et al 2022). We find no evidence that using colors with higher semantic discriminability improves understanding. We observe no significant difference in the understanding of the visual between the control and the treatment group. The full regression tables are available in the supplementary materials.

Colors, concern for the climate crisis and impact on policy preferences (studies I and II)
We then analyze whether altering colors in climate visuals affects how concerned people are about global warming. In both studies, we observe no effect of colors on concerns for global warming. Similarly, we find no impact of colors on policy preferences. The full regression tables are available in the supplementary materials.

Heterogeneous treatment effects
Last, we look at heterogeneous treatment effects for age (young-adult-older), gender (female-male-other) and political affiliation (Democrat-Republican-no strong preference). We observe no significant effects when looking at age and gender. Instead, we observe interesting results when focusing on political affiliation.
First, in study I respondents who identify as Republicans and who saw the modified version of SPM.3 (treatment group) display a better overall understanding (p = 0.002 in the model with all controls, see figure 7). These results are robust to different sets of controls (see supplementary materials). We do not find similar results for Democrats or respondents who state that they have no strong preference for either party.
Moreover, we find that in both study I and study II the treatments affect Republicans' stated policy preferences (table 8).
In particular, in study I we observe that respondents who identify as Republicans and are included in the treatment state that they want higher direct subsidies for fossil fuels (p = 0.02 in the model with all controls), while also indicating a higher support for a carbon tax (p = 0.008 in the model with all controls). Moreover, we observe that in study II as well respondents who identify as Republicans and are included in the treatment support higher subsidies for fossil fuels (p = 0.052 in the model with all controls). We do not find similar results for Democrats or respondents who state that they have no strong preference for either party.
Table 8. The impact of the treatments on policy preferences. The table reports the results of regressions with a fully interacted model accounting for the different impact of the treatment on respondents with different political affiliations in study I (top two rows) and study II (bottom two rows). The specification includes a dummy equal to one for participants in the treatment, a dummy equal to one for participants identifying as Republicans, a dummy equal to one for participants not identifying as Democrats or Republicans, and interaction terms for Republicans in the treatment group and for respondents with no strong preference for either party in the treatment group. The first three columns report the mean and standard deviation (in parentheses) of the variable considered for Republicans (Rep), Democrats (Dems) and respondents with no strong preference. Columns 4 and 5 report the coefficient of the interaction term between the treatment and that political affiliation, along with the p-value obtained from a regression controlling for the treatment, the interactions and controls (demographics, the level of worry about global warming, the level of understanding of the figure, the level of graph literacy, the level of climate literacy and participants' feelings about the color scale used). We study the impact of the treatment on the desired level of subsidies using an OLS regression with robust standard errors. We study the impact of the treatment on the support for a carbon tax through an ordered logit model with robust standard errors. Full regression tables and robustness checks can be found in the supplementary materials.

While there are no universally accepted thresholds to determine whether 'enough' people have understood a visual, two comparisons help put these results in perspective. First, the percentage of correct answers provided to the understanding questions on IPCC visuals is close to the percentage of correct answers provided to the questions used to test graph literacy. But the IPCC visuals conveyed complex multi-layered information, whereas the graph literacy visual merely represented a linear trend. Second, the percentage of correct responses provided to our understanding questions is higher than that provided by experts in previous research focusing on older versions of the IPCC report (Fischer et al 2020). Thus, a sample of experts might display an even better understanding of the visuals used in the Sixth IPCC Report. Importantly, we note that respondents had limited incentives to understand the visuals because their payment was not conditional on them providing the right answer.
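As a sketch, the fully interacted specification described in the caption of table 8 can be written as follows (the notation is ours, introduced for illustration only, and is not taken from the supplementary materials):

```latex
Y_i = \beta_0 + \beta_1 T_i + \beta_2 \mathrm{Rep}_i + \beta_3 \mathrm{NoPref}_i
    + \beta_4 (T_i \times \mathrm{Rep}_i) + \beta_5 (T_i \times \mathrm{NoPref}_i)
    + \gamma' X_i + \varepsilon_i
```

where T_i is the treatment dummy, Rep_i and NoPref_i are the political-affiliation dummies, X_i is the vector of controls, and the interaction coefficients beta_4 and beta_5 correspond to those reported in columns 4 and 5 of table 8. Under this reading, Y_i is the desired subsidy level in the OLS specification, while the carbon-tax outcome enters an analogous ordered logit specification.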

Policy
These considerations suggest that the authors of the visuals of the SPM of the IPCC report did a remarkable job in conveying complex information in a clear and understandable manner.

Consistent color coding and understanding
Using consistent color coding is among the recommendations given in the IPCC Visual Style Guide. However, there is no univocal way to define consistency. In study I we compared two interpretations of consistency. In the framing used by SPM.3, the colors were consistently associated with a given environmental event. Thus, for instance, increases in precipitation were always marked in green. In our treatment, the colors were consistently associated with a given connotation of an event. Consequently, all positive events were marked in green and all negative events were marked in red.
We observe that the visual shown in the treatment was better understood by Republicans than SPM.3. We did not observe the same effect for Democrats, possibly because they displayed a remarkably high level of understanding already in the control group. Therefore, there might have been a ceiling effect. Moreover, we observed that reducing color consistency across visuals did not reduce understanding of the second visual. This suggests that consistent color coding within a given visual is more important than consistent color coding across visuals. This is all the more relevant because many news media included visuals from the SPM in their articles, and generally they did not include all IPCC visuals in the same article. Since people are likely to learn about the IPCC report from the media, there are practical reasons to prioritize color consistency within visuals over consistency across visuals.
One possibility is that the better understanding achieved by the treatment group is driven by the higher semantic discriminability of the colors used in our treatment. In fact, green is often associated with gains, whereas red is associated with losses (Fischer et al 2020). Instead, in the original visual of the IPCC the color green was mostly used to indicate negative events. While we cannot rule out this alternative explanation, we note that in study II we find no impact of semantic discriminability.

Semantic discriminability and understanding
In study II we analyzed the role of semantic discriminability. The control group saw the visual with the low-discriminability colors used by the FT, whereas the treatment group saw the visual with the high-discriminability colors used by the IPCC. In line with the literature (Terrado et al 2022), we expected high semantic discriminability to foster understanding. However, we observed no significant differences between the treatment and the control group.
Clearly, our results do not imply that semantic discriminability is never important.However, they show that the relationship between semantic discriminability and understanding is likely to be nuanced and context dependent, instead of monotonic and universal.

Colors and concern for climate change
Colors are known to have an impact on emotions (Jonauskaite et al 2019). Thus, an important question is whether the colors of a visual might affect the emotional response to it. For example, using colors associated with calm and peacefulness might lead people to underestimate the dangers conveyed by a climate visual. Contrary to our expectations, we observe that colors have no impact on how concerned people are about the climate crisis. One possible explanation for this finding is that climate change is a highly polarized and debated topic, and hence people are likely to have been exposed to information and partisan cues (Goldberg et al 2021). To put it differently, they are 'pre-treated' by the media (Bernauer and McGrath 2016). Nevertheless, we do not suggest that colors never have an impact on how concerned people are about the climate crisis.

Colors and policy preferences
Recent evidence suggests that the features of a visual can affect policy preferences (Romano et al 2020). However, we find that colors affect the policy preferences only of respondents who identify as Republicans (table 8). Our results are consistent with recent evidence suggesting that conservatives' preferences with respect to climate policies are affected by framing (Marlow and Makovi 2023). Intriguingly, in study I we observe that Republicans in the treatment group state higher preferred subsidies for fossil fuels, but they also indicate a higher support for a carbon tax. These results run counter to our expectations, and we therefore refrain from advancing a post-hoc hypothesis. However, we emphasize that our results provide strong preliminary evidence that the colors of climate visuals affect policy preferences. Consequently, we believe that more studies are needed to understand through which mechanisms colors influence policy preferences.

Conclusions
Empirical evidence suggested that the visuals included in previous versions of the IPCC report were often misinterpreted. We ran two large-scale surveys with representative samples of the US population and find that people show a remarkably good understanding of the visuals used in the most recent version of the IPCC report. Moreover, we investigated the role that colors play in climate visuals and the extent to which they affect understanding of the visuals, policy preferences and concerns for the climate crisis. This study focused on only three visuals from the IPCC report and included only US respondents. Future studies should test whether other visuals are equally clear, and whether people from different countries display a similarly good level of understanding.

Figure 1. The left panel represents SPM.3 as it appears in the IPCC report. The right panel represents our treatment. Reproduced with permission from IPCC (2021) Summary for Policymakers.

Figure 2. The left panel represents SPM.4 as it appears in the Financial Times. The right panel represents our treatment, in which we used the colors of the IPCC report. © FT Visual: Camilla Hodgson, 2021, Global warming will hit 1.5C by 2040, warns IPCC report, FT.com, 9 August. Used under licence from the Financial Times. All Rights Reserved.

Figure 3. Flow of study I.

Figure 4. Figure SPM.5(c). This figure was shown to respondents in study I as it appears in the IPCC report. Reproduced with permission from IPCC (2021) Summary for Policymakers.

Figure 6. The left panel shows the percentage of participants in study I who answered correctly the questions aimed at testing level-1 and level-2 understanding of SPM.3 and level-2 understanding of SPM.5(c). The right panel shows the percentage of participants in study I who answered correctly the understanding questions related to SPM.5(c), divided by treatment and control.

Figure 7. The panels above show the percentages of respondents who provided the correct answer to the SPM.3 understanding questions, by level of understanding (levels 1 and 2). The results are presented for the full sample and disaggregated by political affiliation.

Table 2. Questions used to assess participants' policy preferences and to study concerns for global warming. These questions were used in study I and in study II. The questions used to study concerns for global warming are taken from 'Climate Change in the American Mind: Beliefs & Attitudes'.

Table 3. Questions used to assess participants' understanding of SPM.3. These questions were used only in study I.
'There are NO AREAS in which precipitations have DECREASED' (SPM.

Table 6. Summary statistics (mean and standard deviation) for study I, for the full sample and by treatment group.

Table 7. Summary statistics (mean and standard deviation) for study II, for the full sample and by treatment group.

Do people understand the visuals included in the SPM of the IPCC report?
Scholars who carried out studies analyzing people's understanding of visuals included in previous versions of the IPCC report lamented that the results were disappointing (McMahon et al 2015, Fischer et al 2020). On the contrary, we report that respondents show a remarkably high level of understanding of the visuals we analyze.