Clinical Outcome Assessments (COA) – Diving Deeper, Applying Scientific Rigor and Good Instrument Design Webinar Replay
The bedrock of a successful trial is a well-designed protocol with reliable measurements of biomedical and health-related outcomes to satisfy regulatory standards. There is a knowledge gap between the field of clinical trials and other relevant scientific disciplines on the fundamental methodological principles that should govern the collection of clinical outcome assessment (COA) data. This webinar takes a deep dive into how scientific rigor can be applied to COA even though they are subjective. The featured speakers will synthesize the scientific practices underpinning good instrument design and COA data collection methods and recommend next steps for addressing data collection challenges.
Watch this webinar to learn more about applying scientific rigor and good instrument design to clinical outcome assessments.
Key takeaways:
- Best practices to use when designing an instrument, especially in the context of homegrown diaries
- Characteristics of well-designed data collection instruments, such as structure, content and presentation
- Strategies for instrument design that improve the quality of the data and ensure the data meets regulatory standards
- The pitfalls of poor instrument design
Transcript:
0:00:06.0 Ryan Muse: Well, good day to everyone joining us, and welcome to today’s Xtalks webinar. Today’s talk is entitled Clinical Outcome Assessments: Diving Deeper, Applying Scientific Rigor and Good Instrument Design. My name is Ryan Muse, and I’ll be your Xtalks host for today. Today’s webinar will run for approximately 60 minutes, and this presentation includes a Q&A session with our speakers. Now, this webinar is designed to be interactive, and webinars work best when you’re involved, so please feel free to submit your questions and comments for our speakers throughout the presentation using the questions chat box, and we’ll try to attend to your questions during the Q&A session. The chat box is located in the control panel on the right-hand side of your screen, and if you require any assistance along the way, you can contact me at any time by sending a message using the same chat panel.
0:00:54.8 RM: At this time, know that all participants are in listen-only mode, and please note that the event will be recorded and made available for streaming on xtalks.com. At this point, I’d like to thank Clario, who developed the content for this presentation. Clario delivers the leading endpoint technology solutions for clinical trials through experience gained from over 19,000 clinical trials delivered in support of 870 regulatory approvals. Through Trial Anywhere, Clario has mastered the ability to generate rich evidence across all trial models: decentralized, hybrid and site-based clinical trials. With 30 facilities in nine countries across North America, Europe and Asia-Pacific, Clario’s global team of science, technology and operational experts has been delivering the richest clinical evidence for nearly 50 years.
0:01:44.0 RM: Now, I would like to introduce our speakers for today’s event. Dr. Lindsay Hughes is a scientist and leader with 15-plus years of experience in behavioral and life sciences. She has held key national leadership roles, including advisor within the Obama-Biden administration’s global HIV response team, advisor to United Nations leadership, and instructor at the Centers for Disease Control and Prevention. Dr. Hughes is responsible for a team of scientific experts who provide guidance and analysis services related to the creation of electronic data collection systems for clinical trials. And Dr. Jowita Marszewska is a Scientific Advisor at Clario. Dr. Marszewska has experience with electronic data capture and data management in clinical research. She advises on eCOA best practices, diary and instrument design, and training for participants, caregivers and raters. Dr. Marszewska earned her M.Sc. and Ph.D. degrees in the field of chemistry and has authored 10 publications. She has been an advocate for STEM education throughout her career. Without further ado, I’d like to go ahead and hand the mic over to our first speaker for today, Dr. Lindsay Hughes. You may begin when you’re ready.
0:02:57.9 Lindsay Hughes: Thanks, Ryan. Hey everyone. My name is Lindsay Hughes. I’m the Director of eCOA Science and Consulting at Clario, and I’m so excited to be here speaking with you today. As you can hear, I have lost my voice, as I have been flying around and talking about this exciting topic a lot, so I’m going to be handing over the majority of this presentation to my colleague, Dr. Jowita Marszewska, and I’m gonna be jumping in when I can. So, Jowita, over to you.
0:03:25.4 Jowita Marszewska: Hi everyone. My name is Jowita Marszewska, and I’m a Scientific Advisor on the Clario eCOA Science & Consulting team. I have experience with electronic data capture and data management in clinical research, and I advise on eCOA best practices, diary and instrument design, and training for participants, caregivers and raters. Thanks for joining today’s session, which is our deeper dive into applying scientific rigor and good instrument design for clinical outcome assessments. This session covers the best practices for designing your instrument that we recently presented at the SCOPE Conference, where we got a lot of interest and a lot of discussion. We certainly invite you to submit questions in the question box, and we will try to address all of those, or as many as time permits, at the end of the session. So today, we’ll explain the background and context for good instrument design, we’ll talk about research topics in the sciences that are relevant to clinical trials, and we’re going to share examples of poor instrument design and how we improved instruments by applying the scientific principles that we are going to share during the presentation. We are also going to share an example of the data management perspective on the impact of poor design.
0:05:06.8 LH: So before we go through some of the scientific evidence that we’ve gathered through our education and our experience, we always like to start by sharing a little reminder to the audience that you all have some expertise, and some of you may have a lot of expertise, in surveys and instruments, because you’re the end user all the time. So if you were to look in your email inbox right now, or maybe in your junk folder, there are going to be a lot of links and emails like this one, where different folks are trying to collect your feedback on all sorts of things.
0:05:43.5 JM: So let’s begin with an example of a survey that we are all familiar with and receive daily. Here’s an example I pulled from my email inbox. When we look at the invitations to these surveys, they usually look similar. The invitation to the survey always mentions why the feedback is being collected; we see the company wants to understand needs, improve quality and improve the experience, and that’s why they collect data. They explain how the data is going to be collected, namely that feedback will be collected via a set of questions; they provide estimates for the time of completion, say “thank you for the participation,” and offer a token for survey completion. We want you to keep those in mind while you are thinking about the instruments used in clinical trials. Do you see the same principles being applied there?
0:06:54.8 JM: So talking about the instruments used in clinical trials, what you can see here is an anonymized intake diary that supported the primary endpoint in a phase three clinical trial. We thought it could greatly benefit from the application of the scientific principles we will be talking about later in this presentation. Please take a minute to read the questions. What would your reaction be when you saw that survey? Would you fill it out every day? When I was sharing this example with our medical writer, she was really confused by the last question. As she shared with me, she would not know what to answer because the question is very unclear. Would she have to report the number of tablets or the dosage?
0:07:52.5 JM: The diaries that come to Clario eCOA Science are often developed by the clinical team that knows the protocol very well. The problem is that the patient is not familiar with the protocol and does not know each and every nuance. There is a disconnect between what the clinical team knows and understands and what the patient understands when the instrument is presented. There are scientific principles available to apply to questionnaire design, and these are not rocket science principles. The problem we have identified is that these principles are spread across multiple disciplines and not readily collected within one field, like in the case of rocket science. Clinical trials are designed to measure the estimated treatment effects of the study drug. Because of the multi-dimensional nature of clinical trials, there are multiple components that could influence the treatment effect. Collection of data in a clinical trial setting tends to focus on quantitative data. However, clinical trial data is not limited to what researchers can quantify.
0:09:11.1 JM: I am a chemist and can relate to the quantitative part of research very well, but as I’m thinking about the many factors that can influence a clinical trial, it is actually very similar to the chemical reaction environment. In a chemical reaction, you can calculate the reagents, and when the reaction is done, you’re counting on a yield of 100%, but rarely can you achieve such a yield. I clearly remember when, during my PhD, my friend was running the same reaction every month, and during the month of April, the reaction always failed. All the lab mates were trying to understand why the reaction that everyone in the lab was running multiple times a year was failing only in April. After some thinking, it turned out that during this time, the air humidity in the lab was affected by the dry Santa Ana winds outside, and that is why the reaction couldn’t proceed.
0:10:15.7 JM: I feel like this example illustrates well that, in a complicated setting, there will always be factors that cannot be easily predicted and calculated, but these will still occur. Apart from quantitative data collection, there are several research topics in behavioral science that we identified as relevant to COA design, particularly bias, faking behavior and people’s perceptions about diseases.
0:10:50.4 LH: So again, as Ryan said, I’m trained as a behavioral scientist, and during my training I focused especially on mixed methods. Having worked in clinical research and life sciences for almost 20 years, I always find myself coming back to that ideal mixed methods scenario. That’s an approach that mixes qualitative and quantitative items to collect data, and it is very popular in the fields of social and behavioral science as well as in epidemiology. Qualitative data is important because it can provide additional insight into items that could be quantified but weren’t identified as such by the study participants. Additionally, qualitative data can provide insights into the future direction of research. So for example, we can only get answers to the questions that we ask, and qualitative data can help us make sure that we have open-ended questions that could give us answers to things we wouldn’t have known to ask.
0:11:55.8 LH: However, as we look at clinical trials in context, we find that the mixed methods approach isn’t always ideal. The collection of qualitative data, especially in COA or PROs, patient-reported outcomes, needs to be done very carefully to minimize the risk that patients could be sharing information about adverse events that could require clinical care. But there are strategies to help design and run successful qualitative research if researchers want to include such data. Jenkins, in the citation here, identified areas of focus that include sampling, negotiating access to sites and participants, and data collection and management. Bias is an important consideration for clinical outcome assessment design because it can influence the estimated treatment effects of the study drug. Bias is defined as a lack of internal validity, or an incorrect assessment of the association between an exposure and an effect in the target population, in which the statistic estimated has an expectation that does not equal the true value.
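In standard statistical notation, that last clause says the expected value of the estimated statistic, $\hat{\theta}$, differs from the true value, $\theta$:

$$\mathrm{Bias}(\hat{\theta}) = \mathrm{E}[\hat{\theta}] - \theta \neq 0$$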
0:13:05.4 LH: In 2004, Delgado-Rodriguez and Llorca presented the most common biases occurring at different stages of research. Selection bias, information bias and confounding bias are the three main groups identified. Selection bias is an error that arises when the study population does not represent the target population, and it can be introduced at any stage of the research study, during design or implementation. The individuals or groups in the study end up being different from the population of interest, and that leads to a systematic error in an association or an outcome. A good example of this would be trials where one gender is lacking as compared with the other. A confounding bias is often referred to as a mixing of effects, where the effects of the exposure under study on a given outcome are mixed in with the effects of additional factors, resulting in distortion of the true relationship. That’s actually quite similar to what Jowita explained earlier in her chemistry example.
0:14:12.4 LH: In a clinical example, this can happen when the distribution of known prognostic factors differs between the groups being compared. Confounding factors may mask an actual association or, more commonly, falsely demonstrate an apparent association between the treatment and outcome when no real association exists between them. Information bias occurs during data collection, when any information used in a study is measured or recorded inaccurately. We would expect information bias to affect data collected using instruments.
0:14:53.4 JM: We see the discrepancy in bias types in particular when designing an instrument. The authors also describe biases that are specific to the field of clinical trials, such as allocation of intervention, compliance, contamination, differential maturing and lack of intention-to-treat analysis. These biases are important to consider at the time when the protocol is designed. However, they are not the most prevalent in practice at the point of data collection, so we identified a discrepancy between the bias types identified in the literature and those observed in clinical trials while they are running. When clinical trials are executed, information bias, and more specifically its subgroup, misclassification-producing biases, occurs during data collection. Recall bias, observer/interviewer bias and reporting bias belong to this category. Recall bias stems from inaccurate recollection of past experience and is especially common in case-control studies. Observer/interviewer bias is present when the interviewer or observer is aware of the exposure or disease status or the hypothesis, and this knowledge influences the interview or assessment process. Reporting bias is defined as sharing inaccurate information regarding research at some point in the study life cycle, such as design, conduct, analysis, or presentation of study procedures or results.
0:16:39.9 JM: Blinding, a process of keeping intervention information concealed, can be used as a method to reduce, if not prevent, the misclassification-producing biases mentioned here. There is a discrepancy between what is theoretically perceived as relevant to the field of clinical trials and what is relevant in practice. This underscores our point that information from the field of behavioral science is not well applied to the field of clinical trials. Clinical trial data consists of two categories. One big category of data is biomarkers, which are physiologic, anatomic or pathologic characteristics that are objectively measured as an indicator of biological or pathological processes or responses to interventions. The other category is clinical outcome assessments, COA, which describe or reflect how an individual feels, functions or survives. COA rely heavily on observational data, and self-reported data usually supplements observational data. Why does that matter? Continuing the topic of biases in the context of self-reported and observer-reported data, response biases should be mentioned. One category of response biases focuses on response style, where a preference for some response categories is shown by the responder. The responder can show a preference for extreme or mid-point categories, or tend to state agreement or disagreement. Conversely, the responder might not show any preference at all, which is referred to as careless responding.
0:18:36.0 JM: Another response bias is called socially desirable responding, or SDR. This is a more advanced concept, as it involves understanding a question’s content and then tailoring responses to present the responder in a more positive manner, meeting expectations and adhering to social rules. SDR is a two-factor model with two components: intentional and unconscious. SDR can take the form of over-reporting good behavior or under-reporting bad or undesirable behavior. This tendency poses a serious problem for research conducted with self-reports, as this bias interferes with the interpretation of average tendencies as well as individual differences.
0:19:29.1 JM: People’s perceptions about disease can influence the field of clinical trials. There are two concepts that stand out: the semantic differential for health and self-regulatory theory. Both concepts hold that the view of a disease can influence a patient’s medical decisions, like, for example, joining a clinical trial. The semantic differential for health was developed in the 1960s and was the first technique that helped to establish quantitative measures of prevalent health perceptions that were difficult to voice. The researchers used the scales to measure beliefs about cancer, polio, tuberculosis and mental illness. Researchers were able to establish that views of these diseases differ substantially, that the views of individuals were often reflective of a population, and that they therefore affected the public response to health programs; preventive and therapeutic programs could be tailored accordingly. Contemporary psychology of medicine focuses on self-regulatory theory.
0:20:43.4 JM: For a physician, clinical data are a component of their knowledge, whereas for a patient, it is simply a kind of news. In clinical practice, medical staff are often unaware that what they think about a patient’s disease differs greatly from the perceptions and feelings of the patient. A patient’s view of an illness, especially a chronic one, changes over time and can be influenced by factors such as their own illness, the illness of a family member or a friend, education, culture, information from media or other sources, and so on. Variations in illness representation between medical staff and patients may lead to differences in the perceived risks and benefits of taking one action or another when faced with a medical decision.
0:21:44.7 LH: So I think this stuff is absolutely fascinating, and I have spent many, many days throughout the many years of my life devoted to tracking down information about how these behaviors influence a person’s ability to answer questions, particularly in the clinical trial setting. But if you look at this list of terms here, you can see that searching for all of these different terms, which is what would be required to find the level of information in the literature and the review that we just presented to you, is perhaps super, super interesting to me, but it’s not practical for everyone who’s involved in clinical trials. So as we mentioned at the start, and with this talk being one example, we’re really working to coalesce the information that goes across many different disciplines, where these assessments, these clinical outcome assessments, are called many different things, into a kind of unified, crosswalked framework for use in the clinical trial space.
0:22:56.8 LH: So these are just some examples of the naming conventions used here to discuss what we’re talking about, which would be an instrument that collects information from a patient, or about a patient in a clinical trial. So then moving on to the next slide, we actually have a poll question. Ryan?
0:23:19.3 RM: Yes, thank you so much. So appearing on everyone’s screen right now will be a poll question. You can participate by selecting as many of the answers in front of you as apply, and then clicking submit. The question that we’re asking is, COA, or clinical outcome assessments: what are the different types of COA? Your answer options are patient-reported outcomes, clinician-reported outcomes, observer-reported outcomes, performance outcomes, or nurse-reported outcomes. We’ll give everyone some time to consider their answer. Again, you can select more than one, and then click submit in order to participate in this poll. We’ll give everyone a few more seconds. We appreciate you taking the time to participate. It looks like most of you have submitted an answer. Thank you very much. Let’s look at where your results have come in. Alright, we have many of you selecting our first three, the patient-reported, clinician-reported and observer-reported outcomes. 68% of you are adding in the performance outcomes, and then 35% the nurse-reported outcomes. Thank you for participating, and I’ll hand things back to Dr. Lindsay Hughes.
0:24:30.3 LH: Thank you, Ryan. Sorry, I was on mute. Thank you, Ryan, and thank you everyone for participating. So again, COA, or clinical outcome assessments. There are four types of these that really differ based on who’s completing them and what they’re reporting, and you were all right to a degree. There are patient-reported outcomes or PROs, clinician-reported outcomes or ClinROs, observer-reported outcomes or ObsROs, and performance outcomes or PerfOs. And those clinician-reported outcomes could also be reported by someone else at the health center, as could those performance outcomes or the observer-reported outcomes, so a nurse could count in numerous situations, depending on the questions and the type of assessment. When COA are digitized into an electronic format for use on an electronic device such as a tablet or a handheld, like a smartphone, they’re called eCOA. eCOA provides real-time data, which enhances the ability of clinical trials to use information provided directly from clinicians and patients in diverse environments, and it increases the quality of the data and the availability of data for validation and adjudication.
0:25:42.4 LH: It also removes the need for data entry and it creates efficiency and reduces the potential for error or lost data during the data entry process. The first clinical trial that I ever worked on in the early 2000s, I was one of the data entry people who collected the information on paper, and then had huge files and spent hours and hours and hours and hours, just actually entering and double entering and triple entering the data. And that was a major, major cost and a major timeline consideration. So now we’re able to focus on collecting the right data at the right time, the right frequency and the right location.
[pause]
0:26:23.6 JM: There is a knowledge gap between the field of clinical trials and other relevant scientific disciplines on the fundamental methodological principles that could govern the collection of COA data. What we see here is a visual map of keywords, which is the result of an analysis of scientific literature within the field of COA. We were looking for keywords relevant to instrument design and we could not find any. That further proves our point. There is a disconnect between the understanding of questionnaire design in the field of clinical trials and scientific questionnaire design. So how can we address this gap?
[pause]
0:27:12.3 JM: Good design of COA instruments is critical to the success of a study. A successful clinical trial needs a well-designed protocol and good quality data. To have good quality data, a well-designed data collection instrument is essential. And well-designed data collection instruments are made feasible by applying established best practices from clinical research and related academic disciplines such as behavioral science and epidemiology. But do you have to design a COA instrument for every study? And here we have another poll question that Ryan’s gonna help us with.
0:27:57.1 RM: Sorry about that. Yeah, appearing on everyone’s screen right now is our next poll question. This time, what we’re asking is, what are the types of COA instruments that can be used in a study? Your answer options are validated, homegrown, or homegrown with validated scales. It’s been a very interesting presentation so far and we’ll continue to hear more from our wonderful speakers, but for now, we’d just like to know what you think the answer to this question is: what are the types of COA instruments that can be used in a study? It looks like most of you have submitted an answer, so thank you very much for participating. Let’s look at your results. For this one here, we have 65% of you selecting validated, none for homegrown, but then 35% for homegrown with validated scales. Thanks again, and I’ll hand things back to Dr. Jowita Marszewska.
0:28:50.7 JM: Actually, we’re going to Lindsay now. [chuckle]
0:28:53.8 LH: Okay. So, here, what we do see most frequently is these validated assessments, but we actually can see the full range of them, so it was a little bit of a trick question. On this screen, we’ll explain the difference between all three of those. On the left side, we see validated assessments. These are assessments that have been made before. They’re usually copyrighted. They’ve been tested and re-tested for reliability, responsiveness and validity; they’ve gone through psychometric evaluation. On the right side is the whole other end of the spectrum. Those are homegrown assessments. Homegrown assessments are assessments that have been developed from scratch for a given study. Those have frequently been developed by the study team at the sponsor site, often by someone who’s very, very familiar with the compound under investigation, but not necessarily with the best ways to ask for and get information from individuals.
0:29:49.5 LH: And then in the middle is a more balanced option, which would be a homegrown assessment that uses validated scales. That is, the scale has gone through tests to validate what is being measured and by whom, but the other questions are made specifically for the study. And this can be good when a study would otherwise require multiple validated instruments to be used to answer key questions, which could result in patients getting asked the same question over and over again on each of the different forms. So for those of you who are familiar with some of the frequently seen quality of life assessments, for example, we know that we’ll often see those being used in the same study. Well, some of those are asking the same questions for the same validated outcomes. For those of you who are less familiar, I won’t name specific names, but just think about when you’re at the doctor’s office and you have to fill out forms: how frustrated you might get when you feel like you’re answering the same question a few different times. You might think, “Oh my goodness, are they even using this information?” So what we always aim for is to get to the right level of information, so that the patient feels respected and also so that there’s not too much noise in the data. I’m going to turn over now to Jowita to share the ten scientific best practices for instrument design.
0:31:11.4 JM: So we’re gonna start with the first best practice, which is that the questionnaire is a tool designed to collect data for analysis. Keep the questionnaire as short as possible and include only questions that generate data. Follow these steps to develop a new instrument: conduct a literature review on your topic, involve topic experts or survey design specialists early in the process, and perform cognitive interviewing and pilot testing. Although there is investment upfront, a study is only as good as its data. These are some of the simple principles for asking questions that yield good data. The questions should be asked using language that’s tailored to the audience. And the order of questions matters. You might have noticed that any time you’re being asked for information, the very first questions are easy to answer: What’s the date and time? What’s your name?
0:32:20.0 JM: Also, it’s good to place the essential questions up top to make sure that they are answered. Finally, you should work your way toward sensitive questions, so that the patient feels more comfortable and has already started to think about the topic by answering the initial questions. After asking the challenging questions, it’s good to let patients come up for air, so to speak, so we suggest spreading challenging questions throughout. And, returning to the point of audience-tailored language, avoid advanced terminology, acronyms and abbreviations.
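As a loose illustration of those ordering principles, here is a minimal sketch in Python; the item fields and the sort rule are our own illustrative assumptions, not a Clario configuration format:

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One question in a diary; all field names here are illustrative."""
    text: str
    essential: bool = False   # must be answered for the endpoint
    sensitive: bool = False   # personal or uncomfortable topic

def order_items(items: list[Item]) -> list[Item]:
    """Place easy, essential questions first and sensitive ones later,
    per the ordering best practices described above."""
    # sort() is stable, so authoring order is preserved within each group
    return sorted(items, key=lambda i: (i.sensitive, not i.essential))

diary = [
    Item("How many alcoholic drinks did you have today?", sensitive=True),
    Item("What is today's date?", essential=True),
    Item("Did you take your study medication today?", essential=True),
    Item("How would you rate your pain right now (0-10)?"),
]

for item in order_items(diary):
    print(item.text)
```

Running this prints the easy, essential items first and the sensitive item last, mirroring the advice above.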
0:33:00.3 JM: Decide whether free text should be included in your instrument. In a clinical trial context, there is a potential for clinically significant and actionable events to go undetected in free text, but free text can generate data that helps identify content areas or reveal findings that would have otherwise gone undetected.
0:33:21.5 JM: As you are designing questionnaires, you should be treating the respondent with respect. Begin with an introduction of the questionnaire to the responder, and thank the responder for their participation at the end of the questionnaire. Being courteous pays off in the form of a higher response rate. Achieve the highest response rate with a simple layout, consistent formatting and even spacing between questions. Implement your instrument on an electronic device instead of on paper to obtain more complete and accurate data sets. As an extension of being courteous, show the respondent you respect their time commitment: present the aim of the questionnaire, clear instructions, navigational guides, a date-last-modified notification or copyright notice, and a progress indicator on the introductory pages of an electronic instrument. Present answer options on an electronic device in the form of radio buttons when one option is chosen from the group, checkboxes when the responder can check multiple answers, and drop-down boxes when a longer list of options is available. Make sure none of the buttons or boxes are pre-selected. Ensure consistency among users by testing device screen compatibility. These were the ten scientific best practices for instrument design.
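To make that control-selection rule concrete, here is a small hedged sketch, again in Python with purely illustrative names and thresholds (no specific eCOA platform’s API is implied), that picks a widget from the response options and leaves nothing pre-selected:

```python
from dataclasses import dataclass, field

@dataclass
class ResponseField:
    """An answer block for one question; names are illustrative only."""
    options: list[str]
    multi_select: bool = False
    # No default selection: an empty set means the respondent has not
    # answered yet, which keeps the data free of accidental pre-selections.
    selected: set[str] = field(default_factory=set)

    def widget(self) -> str:
        """Choose a control per the guidance above."""
        if self.multi_select:
            return "checkboxes"      # responder can check multiple answers
        if len(self.options) > 7:    # threshold is an assumption, not a rule from the talk
            return "drop-down"       # longer lists of options
        return "radio buttons"       # one option from the group

severity = ResponseField(["None", "Mild", "Moderate", "Severe"])
symptoms = ResponseField(["Cough", "Fever", "Headache"], multi_select=True)
country = ResponseField([f"Country {n}" for n in range(1, 30)])

for f in (severity, symptoms, country):
    assert not f.selected            # nothing pre-selected
    print(f.widget())                # radio buttons, checkboxes, drop-down
```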
[pause]
0:35:32.4 LH: Okay, sorry again. So as we can see, it’s really, really important that the science carries through to the design of the solution, because in fact, they are one and the same. These scientifically designed instruments must be presented in a way that is easy for the respondent to interact with and that yields good quality information. So what we’re gonna present really quickly now are some case examples of poor quality instruments and how we’ve gone through and made adjustments to improve outcomes. So in this example, we got a rescue study that had a diary that was way too complex. A patient would not have been able to complete it once, let alone complete it every week after doing a complex home treatment.
0:36:21.6 LH: In this example, it was a multi-step at-home transfusion intervention that would have been administered by a caregiver, and the original questionnaire was forty-six questions that had levels of detail that when we asked the sponsor how they were going to use the answers to those questions, they didn’t know. And in fact, it would have been something that was really, really burdensome for the patient or the caregiver to fill out, such as the order in which certain vials were administered when in fact, it turned out that the vials all contained the same thing and they were pre-prescribed doses for the patient. And so that’s information that they would have had ahead of time, where a patient may have been trying to fill out that information and the [0:37:09.1] ____ answer would have been wrong or incomplete and could have led them to not…
0:37:13.0 LH: Not be able to utilize that data. Even though those data weren’t important, what they needed to know was: did the patient complete the treatment, and then, basically, how did they feel? What were the reactions to that? So in collaboration with the sponsor, and applying the basic principles of good instrument design, we were able to cut the diary down by 65%, including only the information that was essential to the study, and we created a new instrument in only about two weeks. Most of those two weeks were spent negotiating with the sponsor about the importance of just starting over again, because there was, as there often is, that wrestling with the sunk cost feeling. But this will improve data compliance and data quality regardless of the technology. Can you hit the thing one more time, please? I think we’ve got an animation here. Okay, there we go.
0:38:26.8 LH: Alright, so this is another example that we pulled, actually, in collaboration with our colleagues from data management. The example that I just gave you is what could happen; that was a rescue study that had been written and started to be implemented by a competitor, and when they got to the point of implementation, they saw that it wasn’t possible. What’s the worst-case scenario that could happen if you use poor quality data collection instruments? Well, your trial could fail: the billions of dollars that you’ve invested in the development of your trial and of your drug could ultimately not pass muster with the regulatory authorities, you could fail to show efficacy, you could fail to meet your label claims. And this could occur because you don’t have carefully chosen endpoints, or your instruments don’t measure those endpoints in the way that they need to. That’s the worst-case scenario.
0:39:23.2 LH: Here, we’ve got something that’s a little bit in the middle. This was caught in time, but it still resulted in major delays and a whole lot of work to get it right. So in this example, there was a study with poor diary design, where patients were filling out something kind of like the example that we showed you earlier, where the questions were just unclear: they weren’t sure whether they were supposed to be entering the number of tablets or the milligrams of the dose. And what the team was able to see by doing field monitoring is that patients were misunderstanding this. So fortunately, this was able to be caught and audited enough that data changes were appropriate, but it resulted in almost 30,000 data change requests, which is super, super uncommon for a trial within our Data Management Group. At that time, the next three highest studies combined had 3,500 data change requests. So obviously this is not ideal, and that is also something that they would have to explain to the regulatory authorities, because all of this occurs within an audit trail.
0:40:35.3 LH: So utilizing the principles of good instrument design early on, and really considering a lot of these items during the protocol development stage and then during the selection and early implementation really helps as far as timelines and the quality of your study. And then next slide, please. So yeah, that’s it for us. That’s it for our presentation. So thank you very much, and I think I will turn over to Ryan.
0:41:09.2 RM: Yes, well, thank you very much for that insightful presentation. Before we move on to the Q&A session, I’d like to remind the audience to check out the handouts module within the GoToWebinar control panel, where they’ll find some additional documentation related to today’s presentation to download for some further reading. I’d also like to invite the audience to continue sending their questions or comments right now using the questions window for this Q&A portion of the webinar. I’ve already received some questions, so we’ll get ourselves started with those. The very first question that I have for you here, would just like to know why should sensitive questions be placed at the end of a diary?
0:41:51.6 LH: Jowita, you wanna give that a shot?
0:41:58.7 RM: I think you might be muted there, Jowita. I’m sorry.
0:42:02.3 JM: Sure. So, when a patient is completing a diary, as I mentioned in one of the principles, by placing the sensitive questions at the end, we’re starting with the easy questions so that the patient can get into the questionnaire. We don’t start with the sensitive questions because the patient would not be comfortable answering these straight away in the questionnaire.
0:42:39.3 RM: Alright, very good. Thank you so much for that. And the next question here is a little more involved. They would like to know: if you are taking a validated measure and cutting it down to make it more applicable to the study and the data really needed, as well as to make it less cumbersome for the participant, don’t you need to have it approved by the health authorities first, and doesn’t that take a lot of time?
0:43:03.7 LH: I’ll give this one a shot. So thank you for that question. I think that it is very insightful, and this is something that you need to plan for, although you don’t necessarily have to take it through the health authorities. If you’re looking at a validated measure, that often means that it’s a copyrighted measure as well. So if it’s copyrighted, you need to speak to the copyright holder, and you need to determine whether or not your modifications are appropriate and in line with that copyright holder. Depending on who the copyright holder is, we have these conversations a lot. And then if you’re looking at the necessity of it being validated, that can also take some time, but there are other things to weigh. Those things would be: is this a primary endpoint? Is this a key secondary endpoint? Or is this an exploratory endpoint? These are the items that you need to consider differently. If you’re using something for a primary or key secondary endpoint, that means that it’s going to go on the label.
0:44:03.1 LH: That’s a huge part of the hypothesis and of the package that the sponsor is going to be submitting to the regulatory authority, and in that case, the use of a validated assessment is very important. As the person who’s running a study, or in fact a program, you would also know that this would be in your pipeline, and we would recommend that you start that process earlier. Whereas if this is something that you would be using as an exploratory measure, you might have a little bit more flexibility; for example, you might want to reference that validated measure, but then just determine what the right balance is to develop something that’s appropriate and realistic for your group in question.
0:44:49.6 RM: Wonderful. Thank you so much for that very detailed answer. I’m sure they appreciate it. The next question here is just curious, what is the difference between COA and PRO?
0:45:01.4 JM: So I’ll take this one. COA is the general term for clinical outcome assessments, whereas PRO is specific to patient-reported outcomes, so these are the instruments where the patient is the one answering the questions.
0:45:25.3 RM: Okay, very good. And I think I’ve got another question. Same person wondering what’s the difference between COA and eCOA?
0:45:33.1 JM: So COA are the clinical outcome assessments, and in eCOA, the e stands for electronic: whenever clinical outcome assessments are implemented on an electronic device, we call them eCOA.
0:45:49.5 RM: Alright, wonderful. Thank you for that, for clarifying. Next question up here: the attendee says they’re studying behavioral science in school right now, and they’d like to know how they would go about applying for a job in this field.
0:46:05.3 LH: Well, that is an excellent question and very flattering that you’ve asked that. I think you actually have two individuals on right now who could show you that there’s no one path. My PhD is actually in something called Health and Risk communications, but at the time I was also working in infectious diseases and doing a lot of field work. And as we know, Dr. Marszewska has studied to be a chemist.
0:46:33.5 LH: So it is a field where we find a lot of practical experience and an interest in learning, as well as understanding kind of the whys of what you’re doing. But I would say, and then I’ll also let Jowita answer, do what you’re doing: attend these types of presentations and follow up. I try to set aside a couple of hours a month to have informational conversations with people who are interested in entering the field, to talk about the various pathways that you might take and that are available to you to get where you want to go. But for what it’s worth, when I got into this field, it wasn’t a clear path at all. It was a lot of finding things that were interesting, exploring those, and then kind of keeping going.
0:47:35.0 JM: Yeah, I would agree with Lindsay. If you’re interested, try to explore; go and volunteer in a hospital where clinical research is done, and you’ll kinda learn the ropes of it. And also network with people who are already in the field to get their perspective, and if you’re still interested, then you can start applying for jobs.
0:48:06.4 RM: Good, thank you so much. The next question here is in regard to eCOA: patient surveys, for example, might require corrections, so they’re curious, are audit trails recommended for eCOA?
0:48:23.8 LH: Yes, audit trails are not only recommended, but they are required for eCOA for regulatory submission. So it’s good to keep in mind the principles of ALCOA, or ALCOA plus, which is a set of principles to ensure data integrity in life sciences, and it’s used by the FDA. It includes a range of areas, but it’s basically this: ALCOA stands for Attributable, Legible, Contemporaneous, Original and Accurate, and ALCOA plus adds Complete, Consistent, Enduring and Available. If you want to Google it, that’s A-L-C-O-A. So as you can see, the Attributable part, as well as Accurate, demonstrates the need for audit trails to highlight any changes that were made.
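To picture how the Attributable and Contemporaneous principles translate into an audit trail, here is a minimal, hypothetical sketch in Python of an append-only change log for one eCOA answer; the structure and field names are our own illustration, not a regulatory-grade or Clario implementation:

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """One immutable audit-trail record; all field names are illustrative."""
    question_id: str
    old_value: str | None
    new_value: str
    changed_by: str        # Attributable: who made the change
    reason: str            # why the correction was made
    changed_at: datetime   # Contemporaneous: when the change happened

class AuditedAnswer:
    """An answer whose history is appended to a trail, never overwritten."""

    def __init__(self, question_id: str):
        self.question_id = question_id
        self.value: str | None = None
        self.trail: list[AuditEntry] = []

    def set(self, new_value: str, changed_by: str, reason: str) -> None:
        # Record the change before applying it, preserving the old value.
        self.trail.append(AuditEntry(
            self.question_id, self.value, new_value,
            changed_by, reason, datetime.now(timezone.utc)))
        self.value = new_value

answer = AuditedAnswer("dose_taken")
answer.set("2 tablets", changed_by="patient-017", reason="initial entry")
answer.set("200 mg", changed_by="site-coordinator-3",
           reason="unit correction per monitor query")
print(len(answer.trail))  # 2: the original entry and the correction both survive
```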
0:49:17.9 RM: Wonderful, thank you. Next up here, someone’s wondering: is a diary a PRO or an ePRO?
0:49:27.8 JM: So a diary can be both, depending on how the diary is used. If it’s a paper diary, then it’s going to be a PRO, a patient-reported outcome. When the diary is implemented on an electronic device, then the diary is going to be an ePRO.
0:49:48.9 RM: Perfect. Next up, our next question here. They’d like to know: what is your experience with eCOA and more performance-based measures? Do you find you get more pushback? They say, “I work with a lot of the performance-based outcome measures and find that clinicians really prefer the paper model for collection of their assessments.”
0:50:11.8 LH: You know, at Clario, one of our business units is wearables and connected devices, so we’re really focusing on precision motion, and so I was interested… I thought your question was gonna go in one direction, which was about our regulators, but instead it went in the direction of the clinicians preferring the paper. So I would say I haven’t really heard too much pushback with regard to the paper, with the caveat that we again need to make sure that what we’re asking of the individual is appropriate for the data that we need. So for example, one of the things that we will do, if there’s a complex performance outcome assessment, or if it is a combination that has both ClinRO and PerfO components, is a scores-only implementation. That way, the information that they’re collecting can go into the system without having to be separately entered at another step. So that’s an option we can use to make sure that we’re not asking someone to administer something in a certain order, for example, whereas on paper they could flip around within a really long assessment.
0:51:42.9 LH: But again, this is where our science team always comes in, because if for whatever reason the electronic solution isn’t the best solution, we’ll tell you that and we’ll figure out something that does make the most sense. Sometimes it’s, “Do you need to be collecting that type of data?” And then also just making sure that the trial is as simple as possible for everyone involved.
0:52:08.1 RM: Wonderful, thank you for all of that great information. Another question I have here: they would like to know which platforms you suggest using to create online questionnaires instead of paper ones. They further say, “Do we need to create a new app for our clinical trial, or could we use and adapt a commercial app? In either case, all of them must be approved by the ethics committee first, so could you also maybe share your experience with ethics committees about approving, or not, these apps?”
0:52:38.3 LH: Sorry, I just need to cough. Let’s see. Okay, so obviously we recommend Clario as a platform to create eCOA, but there are others available; there’s REDCap, which is an open source one. And you can use an adapted commercial app; again, this is what Clario does. So for your clinical trial, we have an end-to-end solution with lots of standardized assessments that have already been created, or we can put in customized assessments as well. And then when you say app, that makes me also think of something I don’t know that we mentioned: we have a BYOD solution, and that’s also available throughout the field. So when we talked earlier about eCOA being presented on a device, that could be on a provisioned device, one leased to the study by Clario, or on the patient’s own device, and that’s called BYOD, Bring Your Own Device. In that case, the app is actually pushed out onto the patient’s own device, and our experience there with ethics committees has been very good because we are very familiar with this. Actually, ethics committees in Europe have just moved away from requiring screenshots, so it’s actually gotten even a bit smoother.
0:54:18.8 LH: But just as with anything, it’s good to prepare early on and to have a partner who is very familiar with this, as opposed to going it alone. And we do offer advising at all stages as well. So for example, it might not make sense to develop something like this for a phase one or a phase two study, because you don’t have the patient load that would make it worthwhile to contract with a group for eCOA, but we can point you in the direction of something that does make sense until you hit that tipping point.
0:54:55.3 RM: Wonderful, thank you very much for that very detailed answer and for all of the answers today. However, we have reached the end of the Q&A portion of the webinar. If we could not attend to your questions, the team at Clario will follow up with you, or if you have further questions, you can direct them to the email addresses and contact information that’s up on the screen. I want to thank everyone for participating in today’s webinar. You will be receiving a follow-up email from Xtalks with access to the recorded archive for this event. A survey window will pop up on your screen as you exit, and your participation is appreciated, as it will help us to improve our webinars. Now, I’ve also sent you a link in the chat box; with this link, you’ll be able to view the recording of this event, and you can also share it with your colleagues so they may register for the recording as well. So I encourage you to do that. Now please join me once more in thanking our speakers for their time here today. We hope that you all found the webinar informative. Have a great day, everyone, and thank you for coming.
0:55:52.0 LH: Thank you.
0:55:55.5 JM: Thank you.