TALATERRA

Kathryn Owen, Evaluating Community Events

Episode Summary

Kathryn Owen is an independent evaluator who helps zoos, aquariums, nature centers, and conservation groups assess the effectiveness of their exhibitions, programs, and initiatives. She has worked in the nonprofit sector for more than 20 years. I reached out to Kathryn because I wanted to speak with her about doing evaluation in the kind of community settings where freelance educators often work — settings such as an environmental education fair, a neighborhood conference, or some other community event. How many ways are there to evaluate the impact of brief interactions at community events? Let’s find out.

Episode Notes

Kathryn Owen is an independent evaluator who helps zoos, aquariums, nature centers, and conservation groups assess the effectiveness of their exhibitions, programs, and initiatives. She has worked in the nonprofit sector for more than 20 years.

I reached out to Kathryn because I wanted to speak with her about doing evaluation in the kind of community settings where freelance educators often work — settings such as an environmental education fair, a neighborhood conference, or some other community event.

How many ways are there to evaluate the impact of brief interactions at community events?

Let’s find out.


LINKS

Email Kathryn Owen

Kathryn Owen (LinkedIn)

Kathryn offers free one-hour phone consults and is happy to offer feedback and suggestions on evaluating your programs.

Helpful resources for those seeking more about evaluation include:

My Environmental Education Evaluation Resource Assistant is an online tool that takes you step-by-step through the process of designing and implementing your own evaluation.

The Visitor Studies Association is a professional organization for anyone engaged or interested in visitor research and evaluation.

Informalscience.org offers a large archive of current and past evaluation reports from projects in the field of informal learning. Skim through the reports in the Evaluation section, and you’ll see what kinds of evaluation tools and approaches others have used to measure the outcomes you’re interested in.

North American Association for Environmental Education (NAAEE)

Practitioner Guide to Assessing Connections to Nature

Institute for Learning Innovation

American Evaluation Association

Poll Everywhere

Episode Transcription

Tania Marien: Welcome to Talaterra, a podcast about freelance educators working in natural resource fields and environmental education. Who are these educators? What do they do? Join me, and let's find out together.

This is your host, Tania Marien.

Today my guest is Kathryn Owen. Kathryn is an independent evaluator who helps zoos, aquariums, nature centers and conservation groups evaluate the effectiveness of their exhibitions, programs and initiatives. She has worked in the nonprofit sector for more than 20 years. Kathryn develops evaluation tools to measure people's sense of compassion and empathy towards nature. She also does a lot of evaluation related to conservation behavior change. Kathryn also coaches and trains others in evaluation. She works with practitioners in environmental education, and works with graduate students in the Museum Studies Program at the University of Washington.

I reached out to Kathryn because I wanted to speak with her about doing evaluation in the community settings where freelance educators often work. Settings such as an environmental education fair, a neighborhood conference, and other community events. It is my pleasure to introduce you to Kathryn Owen.

Welcome Kathryn, and thank you for stopping by today. I so appreciate having the opportunity to speak with you about evaluation. When I ask educators how they know if they've been successful at delivering their programs, their response usually references a guest's smiling face or their enthusiasm. And I know we can't all do longitudinal studies, especially at a local community event where we don't even learn the names of the visitors we get to interact with. I often think about how many ways there are to evaluate the impact of short moments, the short interactions that environmental educators have at settings like community events, besides maybe a comment box or flip charts or something like that. What other options are there for environmental educators? That's my big question, and that's why I've reached out to you, so I really appreciate you being here.

Kathryn Owen: Sure. Yeah, I think there's a couple of different elements to your question. Right? So, one is that I think very often people think that either they can do something sophisticated in terms of evaluation, something extremely time consuming and resource intensive, like a longitudinal evaluation, or they can't do anything. But those are two extremes, and I think there's a continuum. So whenever you're looking at evaluating the impact of something that you're doing, I think you need to start by getting really clear about what the purpose is. Is this information that you're trying to gather simply for your own benefit, so that you can make your programs more meaningful and valuable to the folks that attend? Or, on the other hand, is it a program that you're pilot testing, and you want to scale this up substantially over the next few years?

Well, in that case, then I think the burden of evidence that you need is quite a bit more significant, right? If you're looking at spending a sizable amount of money, if you're looking at trying to scale up a program, if you have funding partners who are interested in some pretty solid data on the effectiveness of your program and the impact that you're having on audiences, all of that really dictates the level of formality or informality of the evaluation that you do.

But at the same time, however informal your evaluation is, there are important ways to pay attention to the accuracy and the validity of what you're doing. There are some really common practices that I think we tend to fall into which don't give us back very valuable data. And that's problematic on whatever scale, right? So I guess my first point would be that, rather than jumping into "I want to evaluate XYZ," really make yourself think about: what am I hoping to learn? The most important question is, what will I do with that information? And then, what decisions do I need that work to inform?

And I do think, you mentioned that sometimes people say, "Well, it's the enthusiasm of the crowd. That's how I know I'm doing a good job," or it's the smile on children's faces. And I think those are completely legitimate, because we all evaluate, right? We all evaluate constantly throughout the day, throughout our lives. We're always making determinations based on evidence. And I think anyone who is a good interpreter and has been at this work for a period of time definitely does know how to get that sense from the audience of whether the information they have, and the way in which they're presenting it, is hitting home. So I think we do get valuable information from that.

What we can't do is really collect that information systematically. Because we've been talking a lot about biases, right? The biases that we're all subject to in our society. And it's certainly true on lower stakes issues as well. Every evaluator can tell you that when they were collecting data at an event, their perception of what was happening at the event did not coincide with the data they actually collected. I think there's a phrase, availability heuristic, for that. But it's really just, we see what we want to see or what stands out to us. And it's very hard for us to collect data in an unbiased way.

And so a lot of evaluation, a lot of the data collection methods that one would use, are really giving you a way to systematize that information. I'll give you one example, which is the kind of thing that any program could do regardless of what staff or training in evaluation you have. Before working as an independent consultant, I worked at Woodland Park Zoo in Seattle for 20 years. And I founded the audience research department there, and we had a small team of folks that were doing evaluation. And we were asked by the raptor keepers to evaluate their program. Actually, the initial conversation was that a couple of the raptor keepers felt, just from their own observations, that the program was not reaching families with young children. That young children were tuning out and were not interested.

Other folks on the team thought, no, I feel like we're hitting a home run. I can tell that adults as well as their children are enthused throughout the experience. So our initial step was simply, okay, if their question is, "How are young children receiving this program?" we simply stationed a couple of people at the exit and timed how long it took before people began leaving, and then we looked at who began leaving. And we found out that, starting at seven minutes, families with children elementary age and below would start peeling off, and by 10 minutes, very few of them were left. The program went for 20 minutes.

And so we were able then to share that with the staff. And it was simply the initial measure of whether or not the attention was being held. So the keepers then went and developed a 10 minute program for young children, which, I believe, well, it's hard to say right now because everything has been shut; the zoo just opened up, but they're not doing programs. But that ran for a number of years and has been really successful. So I think depending on the questions that you want to answer at a program, there are a variety of ways to go about getting that information. And again, the level of formality that you need varies quite a bit.
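
To make this concrete, here is a minimal sketch of how that kind of exit-timing observation could be summarized. It is not the zoo's actual data or method, just an illustration in Python with hypothetical departure times:

```python
# A minimal sketch with hypothetical numbers (not the zoo's actual data):
# record the minute at which each family with young children left a
# 20-minute program; families that stayed to the end are recorded as 20.
departure_minutes = [7, 7, 8, 8, 8, 9, 9, 9, 9, 9, 10, 10, 20, 20, 20]

total = len(departure_minutes)
for minute in (0, 5, 7, 10, 15, 20):
    # A family counts as present if it had not yet left at this minute.
    still_there = sum(1 for m in departure_minutes if m >= minute)
    print(f"minute {minute:2d}: {still_there}/{total} families still present")
```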

Tania Marien: That brings me to my next question. One of my objectives is to demonstrate that independent professionals in the environmental education field make good community partners, and that they should be considered as community partners. But when they work at community events, often they work by themselves, or maybe there is one other person. But everybody's busy, busy, busy interacting and doing what it is that they do. And while smiles and enthusiasm and all that is great, and as you've said is legitimate feedback, to have a conversation with community partners, there needs to be something more. Like you said, something systematized, something on paper to document the effectiveness of what it is that you do.

And so, if an educator is working by themselves at an event, what type of thing can they do that they can set up and leave, I guess, on the honor system, so that it collects information as intended?

Kathryn Owen: Right. And that's an interesting question. Because I've worked pretty much exclusively with organizations, I haven't thought about an individual who's coming in to do, say, one component of an event. I guess my first thought is that I think it's so important to always try and make sure that everybody involved in an event, including all the community partners, is upfront about their goals and what they hope their participation in that event does for their organization as well as for the audiences that attend.

So I think my first recommendation would be to talk to the organizers: is there a plan? Do you have a plan to evaluate the event? If so, that would be ideal, because people experience events holistically and the context really matters. And so ideally, you're getting information about each component of that program, including the component that you deliver, because it all works together. But if you're someone that feels confident and has some ideas of how to go about evaluating your own presentations, and they don't have an evaluation plan or haven't thought about that, I think you can certainly raise it. And then let them know what methods you're considering employing and see if they'd like to do that for the event as a whole.

And you mentioned longitudinal studies. I think a really important concept here is that we know from extensive research that a five minute or a seven minute encounter does not have significant long lasting impact much of the time. There are some ways that it can, right? Somebody might meet an animal they've never seen before and remember that in the future. But often, the programs that you really want to invest a lot of attention into, I would argue, when it comes to evaluating effectiveness, are those that you put a lot of resources into. So something like a summer camp. I did a longitudinal evaluation of a bug club a few years ago. These were kids from three to eight years old who come every month and commune about bugs and learn about bugs and look at bugs. We did some phone interviews with them 10 years after the fact, and got some great information about the impact that it had on their lives. But this was a program that most kids were in for a minimum of two years.

So the impact that can have is much more extensive, right, than a five minute program. So I don't think there's much percentage in trying, just as an example, to follow people over a period of time who've come to a brief program. One of my guidelines in terms of evaluation of brief programs is, you don't want to make the evaluation experience longer than the actual experience, right? Because we need to honor the fact that this is informal learning, generally, right? And people are there to have a good time and not to feel as if they're taking an exam.

And so some of the quick things that I do are, with young children, one thing I've done effectively at a number of events is, if they're going to meet four or five animals up close, for instance, to have a picture of each of those animals beforehand and have kids sort those pictures into piles: I like this animal, I don't like that animal. That takes about 10 seconds. Then they do it again at the end of the program. If you have a number of kids do that, you can get some really solid data, and you can look at statistically significant increases in the number of kids who say they like a certain animal after that program. It probably takes them less than a minute to do that sort, that card sort we call it, at the start and at the end. You can do it on a big board with velcro and have kids move stuff around before and after a program.
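
If you want to check whether a before-and-after shift in a card sort like that is statistically significant, one standard option for paired yes/no data is McNemar's test. The episode doesn't name a specific test, so treat this as a minimal sketch with hypothetical responses; it assumes the statsmodels library is installed:

```python
# Hypothetical card-sort responses: True = the child sorted the animal's
# picture into the "I like this animal" pile; one before/after pair per child.
from statsmodels.stats.contingency_tables import mcnemar

before = [False, False, True, False, True, False, False, True,
          False, False, True, False, False, False, True, False]
after = [True, True, True, True, True, False, True, True,
         True, False, True, True, False, True, True, True]

# Build the 2x2 table of paired outcomes: rows = before, columns = after.
table = [[0, 0], [0, 0]]
for b, a in zip(before, after):
    table[int(b)][int(a)] += 1

# McNemar's test looks only at the children who changed their answer; the
# exact version suits the small samples typical of a single event.
result = mcnemar(table, exact=True)
print(f"switched to 'like': {table[0][1]}, switched away: {table[1][0]}")
print(f"p-value: {result.pvalue:.4f}")
```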

With adults, one of the things that people are using more and more at programs is cell phones, because cell phone technology is so ubiquitous. There are apps; one example is Poll Everywhere. So you simply ask everybody in the audience, go to this site now and it'll ask you three questions about this program that you just attended. You could do it at the start of the program, and then again at the end of the five minute program. And even if you did both of those, it would take users a really short amount of time to complete.

I've also had people at programs do a Likert scale effectively, if you happen to have someone there, and you can even hand out the sheets ahead of time, right? To the seat of everyone who's attending the program. There's a tool called a semantic differential, which simply takes two words; an example would be ugly and beautiful. So two diametrically opposed words. And then you have a scale, a Likert scale. So one might be ugly, five might be beautiful. And you can have people rate the animals that they learn about on that scale before and after.

And whenever I've done that, the encouraging thing, I think, for us as environmental educators is that you don't usually see significant change with the well-loved animals; where you see significant change, when you're doing it right, is with animals that are not at the top of anyone's list. Right? So I'm thinking of a vulture program, a spider program, programs where we saw people take pretty significant jumps in how they would characterize those animals after hearing about them and seeing them up close. I think hearing about them is a critical piece, right? Because we also know that framing how you talk about an animal has a huge impact on how people perceive that animal. So those are some tools that can be used, and there are certainly more, in terms of things that are really quick for different audiences at a very short program, where it doesn't make sense to invest a lot of time and energy and resources.
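
For the semantic differential, the paired 1-to-5 ratings can be checked with a test for matched ordinal data; the Wilcoxon signed-rank test is one reasonable choice, though again the episode doesn't prescribe one. A minimal sketch with hypothetical ratings, assuming scipy is installed:

```python
# Hypothetical semantic-differential ratings: each visitor rates an animal
# from 1 (ugly) to 5 (beautiful) before and after the program.
from scipy.stats import wilcoxon

before = [2, 1, 3, 2, 2, 1, 3, 2, 1, 2, 3, 1, 2, 2, 1]
after = [4, 3, 3, 4, 3, 2, 5, 4, 3, 3, 4, 2, 4, 3, 3]

# The Wilcoxon signed-rank test handles paired, ordinal ratings; ties
# (visitors whose rating did not change) are dropped by default.
stat, p = wilcoxon(before, after)
mean_shift = sum(a - b for a, b in zip(after, before)) / len(before)
print(f"mean shift on the 1-5 scale: {mean_shift:+.2f}")
print(f"Wilcoxon signed-rank p-value: {p:.4f}")
```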

Tania Marien: That's very helpful information. Okay, so we're talking about animals here. My previous endeavor was plant-based education. So what about plants? Plants don't quite get the spotlight. What do you think? Or what have you observed that is a fair assessment of people's feelings towards plants?

Kathryn Owen: Well, I guess what I'm thinking about is, I've evaluated some family nature clubs. I'm sure you're familiar with the Children and Nature Network, one of the things that Richard Louv and others developed, which is also a great source of research articles on the relationship between children and nature; in a lot of those research articles, you can get ideas about evaluation techniques that you might want to use. But with family nature clubs, I think I've seen really positive shifts in children's interest in natural objects as a result of having an experience where they and their caregivers are in nature doing facilitated activities over a period of time.

What I often see is that every child is interested in natural objects, right? If you go outside and spend some time. But what I've noticed with kids and also their families is that if you do some more focused interpretation and facilitated activities that relate to plants, you'll see children's interest in what those plants are start developing. So instead of just collecting things, they start coming up with more questions. Why is this one shaped like that? They develop more interest in those plants.

I don't know, I'm trying to think. I haven't followed any family nature club kids over a long period of time. So the things that I have seen shift, again, are interest in plants, interest in collecting plants, and an increase in the questions that kids have about them. But I think, like anything else... one thing that focusing on evaluation has taught me is that the question of "Well, what do people think about XYZ?" is so influenced by what we as environmental educators say and model for people. And so there certainly is a lot of research on attitudes towards plants. There's a researcher named Kim-Pong Tam, who's in, I'm trying to think of what university he's at in Hong Kong, but he's done a lot of research on empathy. And he believes that there's evidence that people can feel empathy for the natural world, not just for animals, but for nature as a whole.

And I think that's a really interesting concept, to think about empathy and compassion for all living things as a construct that can be influenced by life experiences and by educational experiences and so on. But yeah, trying to think about any other plant specific knowledge. [crosstalk].

My work is very animal heavy, I have to admit. Animals and then climate change; a lot of it is focused on climate change education recently. Because I live in the Pacific Northwest, a lot of that has been related to ocean acidification, learning about the impact of climate change on the oceans.

Tania Marien: How did you become an evaluator? Did you have a background in museum studies?

Kathryn Owen: No. People use the phrases audience research and evaluation somewhat interchangeably. Audience research often includes program evaluation, maybe exhibit evaluation if you're at a museum, and then psychographic research and demographic research on participants. Audience research didn't used to be a thing, right? People did it in marketing; certainly they spent a lot of time trying to figure out what would influence people in a certain way, and that's been going on for a long time. But the field of visitor studies and audience research is relatively new. There are now a handful of graduate programs around the country, including at the University of Washington, at George Mason University, and a few others.

But when I went into the field, I actually had worked as a community organizer and union organizer, right out of college, for about 10 years. And I was raised to spend a lot of time in nature; my family went on a lot of hikes and camping trips. But I have to say, for about 10 years after I got out of college, I was not at all focused on the natural world. And then my first husband passed away after that first 10 years of community organizing and union organizing. So I took a break from that; it was really demanding work in those couple of years when he was sick. And then after he had died, I started to just find myself in parks, and I would just sit for hours in parks. And I physically felt that I needed to be outdoors, which was so interesting to me because I hadn't felt that need for a long time.

And I think this sounds cheesy, but it's actually true. I felt after that, I need to give something back because I understand now. I understand what nature and interactions with living things do for people and why it's so important. So I began volunteering at a zoo. I was then hired to work in [inaudible], and so I was doing fundraising. And as a fundraiser, you need to be able to articulate why it matters. Why does this program matter? Why is this exhibit important to build? And I was so interested in how people were perceiving the animals that they saw, the naturalistic habitats that they saw.

And so I'd walk around the zoo just eavesdropping on people and thinking this is probably borderline illegal, but I'm just so interested in this. And it would help inform my work in development. But I became so interested in that audience perspective on their experience that I ended up going back to graduate school. I had studied sociology as an undergrad, and I got a master's degree in education. But the most important thing I did is that I apprenticed myself to an organization called the Institute for Learning Innovation, which is now in Oregon, but they were based in Annapolis, Maryland at the time. And they did a lot of work for the Smithsonian. And so I worked with them and worked with Smithsonian's Natural History Museum and American History Museum.

And I really just approached them and said, I want to learn the ropes. Because they were one of the few organizations at the time that I could find that was doing audience research. And they were doing it as consulting for, as I said, the Smithsonian and others. So I apprenticed with them and then came back to the institution I was at and pitched the idea that we start doing audience research. And that took about a year of persuading. I really wanted to make sure that it was perceived as valuable, and to legitimize that, to establish that there was a need for it. So I spent a lot of time talking to everyone that worked at the zoo, the zookeepers, the horticulture staff, the marketing staff, PR, the leadership, board members, to try and figure out, "Do you have questions about our audience? What are those questions? If we put together a research plan, what questions should be at the top of that list?"

So, yeah, I ended up doing that, and we were fortunate. In Seattle, I think the first few grants that Woodland Park Zoo received were from the Bill and Melinda Gates Foundation, who had a very strong interest in impact. They were not going to give money unless we could demonstrate the impact of what we were doing. And now it's certainly not just the Gates Foundation; I think a lot of funders, regional as well as national, have played a really important role in terms of raising all of our consciousness about the need to understand our audiences and to look at the impact that we're having.

I guess my kind of driving... and this is another long answer, but Marien, I'm heading towards the finish line. My driving purpose for doing evaluation is that we don't have a lot of time. You and I know we don't have a lot of time when it comes to saving the earth, right? That used to sound like hyperbole, but now increasingly people recognize that it's true. And so we cannot, I would argue, we can't afford to be doing programs that we think people like. I feel strongly that anyone developing a program needs to be able to identify: how are you going to know that you're making a difference? Because as a field of environmental education, we need to know what the most effective strategies are for helping people care about the earth and put that caring into action.

In addition to just making sure that we're asking those important questions from the outset when we're developing a program, I think it's really incumbent on us to think about: are we using tools that other institutions or other educators have used to measure the same constructs? So as an example, sense of connection to nature. Often people talk about that, but ask 10 people what it means and they'll probably have 10 different definitions. But NAAEE, the North American Association for Environmental Education, has this awesome initiative where they're developing a toolkit for how you would go about measuring sense of connection. So for environmental educators around the country, or throughout the world, really, they're recommending a few standard tools that anyone could use to measure whether or not your program is increasing connection to nature.

Because that way, we can pool that information using the same tools, and we can identify the most effective programming choices and offerings out there. When we each develop our own survey, we can't compare anything. And I think we need to; that's part of just really taking this measurement business more seriously, and trying to do it as much as we can in a coordinated way. Not that I want to scare people off if they've not done an evaluation of their programs; that doesn't need to be your starting point. But I just think that's really something important that our field is moving towards, and it would be great for listeners to check out NAAEE and get a lot of information about that initiative, for instance. And see if they're interested in becoming part of that and using some of those tools with their program.

Tania Marien: Yeah, that's a good resource. Yeah, thank you. What should a freelance educator do to begin to evaluate a program? In your experience, does a program need to be completely dismantled, or do they need to step back and look at what they have and then build it up to be something that can be measured by some instrument?

Kathryn Owen: Yeah, I think the first thing I would do... and I find many organizations have been collecting data for a while, and this probably is not as true of individuals, but I think sometimes people do collect data and don't necessarily use it. I can't even come up with a number for how many organizations have told me, "Oh, we have all of our participants fill out surveys." And when I ask them what they've learned from these surveys, what they do with the information, often the response is, "Well, we don't really have time to look at it." And then they show me a file cabinet with all these surveys that they've never entered or analyzed. And it's an issue of time. It's that feeling that, well, I should do a survey, but then they're not able to follow through with it. And I think that's a problem, right? I really feel that if we're going to ask the people that come to the experiences we offer to do something and give us feedback, we have a responsibility to use it, or we shouldn't ask them.

And so, one, if you have been collecting any information, I think it's great to just ask yourself, what is it telling me? Am I using it? Do I still need it? If you haven't been doing any audience research with the programs that you're looking at, I think sometimes anything can collapse under its own weight if it gets too big. And I think the same can be true of coming up with an evaluation plan for all your programs.

So with the clients that I work with, I often recommend that people start small and think about one particular program and what the question is. And say there's no funder involved; it's really just for your own information, and perhaps for the community partners that you're working with, if they've expressed an interest in it. Then look at: what do I hope that my program is doing for people? What question do I have about what they're taking away from it? And really identify one or two or three; you can think of them as research questions, right? So I don't mean a question that you would ask a visitor or an audience member. But an example would be: I've been talking about vultures forever, and how important they are as part of the ecosystem. Is that making a difference? Are people more interested? Are they more likely to read information about vultures as a result of this experience? So think about one or two questions and really start there.

There are a lot of resources, and we might want to list these; unless you have them on your site, I can certainly send you a list. But there are a lot of resources that you could review to look at what some options might be in terms of tools that you might want to use. We often think of a survey first, because we've all taken a million surveys in our lifetime. But you can learn a lot by just watching, by observing, depending on what your question is. You can do a lot of things with children, and especially with children and participants for whom English is a second language, I often recommend using graphical images. So using photographs or images, having people draw. I've done a lot with journaling, and that's with longer programs, right? So maybe with a program that lasts for a summer, you have participants respond to different prompts and then use that as information.

I'm sure that people are familiar with this, but there's quantitative and qualitative information that folks often hear about. And the thought used to be that if it wasn't numbers, if it wasn't quantitative information, then it had no validity. The feeling about that has really changed, and I think there's much more interest in and openness towards qualitative approaches as well. Qualitative simply means that you're getting words back as the type of data, as opposed to numbers, and they both have value. And often, it's good practice to really look at approaches that incorporate both.

So I guess, again, just going back to: I would start small. I would start with one program and a few questions that I have, and then just start looking at what are some potential tools that I could use that would be relatively easy for me and relatively easy for the participants that come to the program. And then there are organizations; the Visitor Studies Association is one of the really premier organizations in this country for folks who are interested in audiences and audience research, and they have a ton of really useful information on their website, as well as conferences and a listserv. So you could participate in listservs, or web chats or webinars, if you wanted to get more information. There's a lot of offerings out there now.

I think in environmental ed, we used to be so focused on knowledge, right? That was the only thing, when I first started doing this work, that people ever wanted to measure. Are they learning more about ecosystems, are they learning about the interrelationships within this particular ecosystem? I think, as a field, our understanding has grown that attitudes and emotions are extremely important, and behavior is extremely important. And so I think there's been a lot of focus and attention given by researchers over the years to measuring emotions and attitudes. So empathy may sound like something that would be a challenge to measure, but there are a lot of different interesting tools that people can use.

I often get that question: "Well, we want to know whether or not this is changing people's hearts and minds, and I know that's impossible. So what should we be asking them?" But really, nothing is impossible, in the sense that there are people developing tools to measure anything you can think of, really, in terms of the types of outcomes that we in environmental education are interested in. It's just a matter of tapping into those resources. And again, we can provide a list of some of those resources.

Tania Marien: That'd be wonderful. Thank you. In your experience, what needs to be investigated in the field of environmental education? What hasn't been researched well?

Kathryn Owen: I think behavior change. To me, there's just such immediacy around behavior. And I think that until fairly recently, we didn't have a lot of good strategies, or weren't implementing a lot of strategies, for measuring what people do. I think that sometimes it can be more challenging, right? People often feel like, "Can I actually get honest information from participants about whether or not a particular event or my talk moved them to action?" But I think that there are ways to get around the challenges. Most of us want to be seen in a good light, so if you're not careful about the wording of your questions, people will tend to give you a positive interpretation of their own behavior. Again, I think it's something that we all do.

But there are ways to ask questions about behavior to get really useful feedback. I think one of the most promising things I've seen a few organizations doing recently is to partner up with, say, the natural resource providers in your area. So maybe partner up, for instance, with the energy company. Maybe you're really interested in trying to tie your interpretation of a particular resource to the action of saving energy, turning lights off when you leave the room, for instance.

So a couple of organizations have recently been working with energy companies in their city to look at, "Okay, we can tell you, because we're collecting zip code data, what parts of the city people are coming from to participate in our programs. Now let's look at their actual energy use and see if we see a change over time." Now, a one off talk is not going to move a neighborhood, but if you were doing talks in a focused area for a long period of time, you could approach some community partners, something like an energy company or your local water company, your city, whoever it is, to try and get that on the ground data on impact.

Short of that, what I've seen over and over again, and I do a lot of reviewing of instruments and evaluation plans too, is that we are too often asking people: as a result of coming to this program, do you feel more motivated to do XYZ? Do you feel like it's making a difference when it comes to your personal behavior? And I think any reasonable person is going to say yes, because you feel more motivated at the time. But that doesn't translate into action. So there are ways that you can ask about that. One of the most important, and it's a little more time consuming, is to ask that through an open ended question: is there any way in which you feel like this program is changing your own behavior? If people don't write anything, if they leave that blank, then you've got a pretty good answer there, versus if they actually gave you something.

So I think in the interest of saving time, we ask too many closed ended questions about conservation intent. And I think it's given us a false sense of the value, or the impact, of some of our programs. Many organizations, large and small, are using questions that I think are really, unfortunately, flawed and that predispose people to answer in an affirmative way. So I think continuing to dig into what is the combination of factors and facilitation that influences people to engage in certain behaviors is a super important area of work. The other thing, the first thing that people need to do if they've not already, in looking at that question, is to figure out: what is the action that we want people to take?

Very often I've had program presenters or designers say to me, "Well, anything they do in terms of environmentally responsible behaviors would be great." Well, if that's how you feel, that comes across to the participant, right? I've evaluated I don't know how many programs where people have come back and said, "They talked about actions a lot, but I can't remember a single one." And so I think the first thing that we need to do as individuals and organizations is to be crystal clear about what is the one action that you're motivating in a program. Don't give people a suite of actions. They want to know, because you're an authority, right? You have researched the subject, presumably. Anybody can pull up within seconds online a list of 10 things to save the earth. But people really have this burning desire to know what's going to make a difference, and I think we all do.

So if you, as the program presenter and an authority in their mind, can say: you know what, there are a lot of things people can do, but the evidence I've been following shows me that reducing food waste is one of the most important actions, so that's what I'm really focusing on and would really recommend to you, and here are the tools for how to do it. Right? Because people need not just content knowledge when it comes to behavior change; they need procedural knowledge. How do I go about doing that is often the stumbling block. So it's not just measuring behavior change, but even before that, getting clearer about what the behavior or behaviors are that you're encouraging.

Now, with younger children, and I'm sure this is something that you've probably talked to previous guests about a lot, I often think of it as: our outcomes for younger children are not so much about "here's the one action you should take," because we're trying to grow environmental stewards. So I think there's a couple of different paths, and those are somewhat different outcomes, but certainly extremely important, right? That we're delivering programs that encourage kids to take on an ethic of stewardship that lasts throughout their lifespan. Which is different, but I think parallel in importance to motivating individual actions with older kids and adults.

Tania Marien: How has evaluation changed in light of the current moment in history? What do you think will happen to it?

Kathryn Owen: Which aspect of the current moment?

Tania Marien: Okay, the virus. That's a good question. Well, let's start with the virus.

Kathryn Owen: Yeah. Well, I know that the clients that I work with are doing everything online, right? And so people have a lot of questions about how we assess the impact of online experiences and online tools. It's interesting to me; if you think about the museum field, which I know better than other fields, a lot of organizations were still lagging behind when it came to virtual experiences. But everyone is there now, right? Because we've really been pressed to do it.

And so there's a lot more energy being devoted to things like Google Analytics, right? How many people are looking at my website? What pages are they looking at? How long do they spend there? Are they sharing content? Do they like content? Helping people evaluate their Facebook live streams. Really, developing ways to evaluate online experiences. In terms of the pandemic, I think that's the huge impact.

The other huge impact, unfortunately, of course, has been job loss. This enormous job loss, I know, in environmental ed and in the museum field. And so there have been a lot of conversations about just that: holding the field together and trying to make sure that the expertise and knowledge doesn't go out the door with some of these job losses. Obviously, the other important things about this moment, two of many, right, are the climate crisis and then the awakening around social justice and the impact of structural racism on all of us. And those could each have a conversation of their own, but I think there has been a movement over the last five years at least, though still relatively recent, towards culturally responsive evaluation.

So just one example of that: in the past, sometimes people had an audience that included people who spoke Spanish as their first language and had somewhat more limited English skills. People would often do a survey of only the English speaking audience because they didn't have money to translate it. And I think we just can't allow that to continue happening, right? Because we know that we're leaving out a huge and important part of our audience. But there are a lot of different dimensions of really trying to honor and respect the people who have been the victims of structural racism for many years, and the implications that has for audience research.

I think in terms of climate change, and in terms of poverty and the economic impacts of the pandemic, there's more happening, especially within the American Evaluation Association, AEA, which is another great resource for people to learn more. There have been a lot of conversations about really looking at what are the most critical questions that we can help to answer within society right now. So how do we get ourselves in a position to be able to answer those, and to help our clients think about those big questions? Because we don't have all the time in the world to figure them out.

Tania Marien: Yes. Where can the educators listening to this podcast learn from you offline?

Kathryn Owen: So I have a website; the name of my business is Kathryn Owen Consulting, and the website is kathrynowenconsulting.com. This week it's undergoing some renovations, so depending on when you air this, let me know so I can make sure that the link is legitimate and works. You could certainly contact me through LinkedIn; my LinkedIn profile has some information about the clients that I work with and some of the services that I offer. And anyone that lives in Washington State, I'm putting in a plug here, who might be an independent educator: a group of us are doing a pro bono series of web chats about evaluation during the pandemic, to give people some tools. We're doing that for museums and others throughout Washington State, and people could certainly send me a note if they're interested in that.

What I always do, for anyone that has interest in talking through a project, so you want to start looking at evaluating one of the programs that you deliver and you don't know where to start: I always do a 45 minute call at no charge, to help people think through something, and I'm always happy to do that. Sometimes it's a 45 minute chat to see whether or not we want to work together on a bigger project. But I'm fine, too, if you just want to talk through something and you don't have any resources. Because I've just been putting more and more focus in my own practice into not just evaluating stuff that others do, but really helping people develop those skills, because I think it's so important.

We didn't talk about this, but there's only so much time in the day to talk about things. I was just thinking of people that work in interpretive design that might be listening to this, that develop text for signs at a park, for instance. One thing that would be great to look into, for folks that do any kind of writing, but in particular if you're developing things that a public audience would use, is prototyping.

There's no reason to develop a sign and put it out in a park that hasn't been pilot tested. The first thing you can do is simply take that text, and I've done this a lot, even when I've been working at organizations that have lots of money. Before they develop a sign or an exhibit, which is going to cost them who knows how many thousands of dollars, just create a mock up, like a laser printed version of that sign. Level one of prototyping is: ask your family, your friends, your roommate, anyone who doesn't have knowledge of that content. Have them read that sign and let you know, is there anything that wasn't clear in here? And that would save all of us so much time and money. Because I think often, we create things without having done that front end research to try and figure out, is this meeting the need?

And so there are a lot of great resources online about how to do prototyping, including online prototyping as well as in person. There are different ways to get feedback from folks on interpretive text. Think of how much time we spend on the font and the images. But in two hours, you could take that mock up out to a park where you might be looking at installing the sign, stop 20 people, get their feedback, and you would have a better sign. With projects like that, you'll find that within 20 or 30 participants, you'll start to see trends. And if that's what you can do, if you don't have money to hire some firm to do a formative prototype evaluation for you, you can do that on your own and get great information.

So I guess that's what I would hope to leave people with: the idea that it's not all or nothing. It's not a randomized controlled trial or nothing. There are many ways to get valid data from relatively low key, relatively non labor intensive methods that can still tell you a lot about your programming or the interpretive content that you're creating.

Tania Marien: This has been fantastic. Thank you, Kathryn so much for sharing-

Kathryn Owen: Oh, sure.

Tania Marien: Your knowledge, and for getting me, definitely, all excited, and getting our [inaudible] excited and enthused about their next projects and what they're working on. So thank you so much for your time. Absolutely brilliant.

Kathryn Owen: Yeah, I really enjoyed it. And I've not gotten involved much yet, but I really appreciate seeing your messages every week, and I'm really supportive of what you're trying to do in terms of helping this community become more of a community and supporting each other. And this is such an important time to be doing that, for a lot of reasons.

Tania Marien: Thank you, Kathryn. Thank you. I appreciate it. Talaterra is a podcast for and about independent educators working in natural resource fields and environmental education. If you enjoyed this episode, please share it with friends and colleagues. Thank you so much for joining us today. This is Tania Marien.