Masterclass on the applications of high-content imaging in 3D models of disease

In this on-demand webinar, Dr. James Evans describes how he generates imaging data from physiologically relevant cell models.

3 Feb 2022
Rory Shadbolt
Publishing / Media
Dr. James Evans, CEO of PhenoVista Biosciences

Generating translatable high-content imaging data, including 2D and 3D structures, is extremely valuable for drug discovery and preclinical research.

In this expert SelectScience webinar, now available on-demand, Dr. James Evans, CEO of PhenoVista Biosciences, presents case studies on how Yokogawa’s Benchtop High-Content Analysis System can improve throughput and standardize processes for complex 3D cell-based phenotypic assays.


Read on for highlights from the Q&A discussion, and register now to watch the webinar on demand.

What plates do you use for confocal imaging of live and fixed cells and organoid slices?

JE: Those are hugely important. We typically go with Greiner as our manufacturer, and I'm happy to share catalog numbers if you reach out to us. The biggest driver for us in selecting plates is, first, can we get high-quality images? Closely followed by, how consistent is the plate geometry in terms of flatness and well spacing? Because that will impact the autofocus performance of these platforms. Then there are supply chain issues: you've got to be able to actually get those plates. We've been working with Greiner almost exclusively over the years.

How did you quantify phenotypes?

JE: There are cases where an algorithm will pull something out that your eye doesn't detect, but at this stage, the rule of thumb when we're developing assays is: can you see a hint of something happening there by eye? And, is it consistent? If you can, then an algorithm is going to be able to tackle it.

Of course, there are AI approaches, and there's still a lot of development, and we're involved in that too, but AI is only as good as its design, which can often still be outperformed by humans. So we use a range of different 3D segmentation tools. I mentioned CellPathfinder. There are other solutions out there from other microscope companies that we use, as well as third-party software packages. Imaris is something I've used throughout my whole academic career, and also Huygens, from a Dutch company, which has nice 3D segmentation analysis.

We have a whole toolbox, and depending on the particular questions a client is asking, and potentially the peculiarities of the biology and what we're trying to measure, we'll use the right tool for the right question.
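
To make the idea of 3D segmentation concrete, here is a minimal sketch of labeling nuclei in a confocal z-stack with scikit-image. It is a generic illustration only, not the workflow of CellPathfinder, Imaris, or Huygens mentioned above, and the file name, smoothing sigma, and size cutoff are placeholder assumptions.

```python
# A generic scikit-image sketch of 3D nuclear segmentation on a confocal z-stack.
# File name, smoothing sigma, and size cutoff are placeholder assumptions.
from skimage import io, filters, measure, morphology

stack = io.imread("nuclei_zstack.tif")          # hypothetical (z, y, x) volume
smoothed = filters.gaussian(stack, sigma=1.5)   # suppress noise before thresholding

# Global Otsu threshold to separate nuclei from background, then drop small debris
mask = smoothed > filters.threshold_otsu(smoothed)
mask = morphology.remove_small_objects(mask, min_size=500)

# Label connected components in 3D and report per-object volume and intensity
labels = measure.label(mask)
for obj in measure.regionprops(labels, intensity_image=stack):
    print(f"object {obj.label}: {obj.area} voxels, mean intensity {obj.mean_intensity:.1f}")
```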

For the 3D assays, do you need to perform 3D voxel segmentation or is it enough to use Z-projections?

JE: If we can get the answers with a Z-projection, like a maximum intensity projection, we will, because that's going to be a lot more scalable. So we do true 3D voxel segmentation when needed. And often it's not a black-and-white answer. Maybe we'll get some summary data using the projection, and then, from the hits or the areas of interest later in the study, a client might direct us to say, "Okay, let's delve deeper into this particular area and get some 3D tracing of neurites, for example."

There's also a middle ground where it might take a partial projection. Some of the issues arise when you've got a dense culture: the density of the objects is so high that when you project the 3D volume, individual objects become almost indiscernible. So with those caveats in mind, it will be a combination of projections and true 3D that we use.
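
As a rough illustration of that trade-off, the sketch below computes a maximum intensity projection from a z-stack with NumPy-style array operations; the file name is a placeholder, and it is not tied to any particular instrument or software named in this discussion.

```python
# Minimal sketch of a maximum intensity Z-projection versus keeping the full volume.
# Assumes a (z, y, x) image stack; the file name is a placeholder.
from skimage import io

stack = io.imread("spheroid_zstack.tif")   # hypothetical 3D volume, shape (z, y, x)

# Collapse the z axis: fast and scalable, but dense, overlapping objects can
# become indiscernible in the projection
mip = stack.max(axis=0)

# When that happens, per-object measurements are made on the full 3D volume instead
# (see the segmentation sketch above)
print("volume shape:", stack.shape, "-> projection shape:", mip.shape)
```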

Would you discuss your ability to measure macrophage phenotypic M1 or M2 status within a tumor fragment or metastatic cancer cell cluster?

JE: That's a very specific question. Thankfully, I have an answer. We have performed M1 and M2 assays several times for clients over the years. What we haven't done to date is explore that in a 3D environment; it's mostly been co-cultures in 2D settings. We certainly have the building blocks to do it, we just haven't done it in 3D.

What is the proportion of live-cell versus fixed-cell assays in the high-content imaging projects you work with? And what is lacking for making live-cell assays more popular?

JE: My background, back in academia, involved a lot of live imaging and 3D timelapse imaging, and it's very time-consuming. The biggest limitation is the number of conditions that you can get through per unit time; you spend a lot of hours on microscopes, especially trying to achieve a 3D timelapse. 2D timelapse is a bit more automatable, and there are certainly platforms that do that, which we have.

But there's no getting away from this: from a commercial perspective, there are three main counterarguments to any live imaging. One is the availability of reagents and dyes that you can use in a live setting. The second is tying up instrumentation, when you could be getting data at a higher rate using fixed endpoints. The third is always the worry that the live dyes might be interfering somewhat with the biology. Those three things are always the counterarguments to doing live-cell experiments.

What we tend to do is use live-cell experiments during the early optimization phase to figure out the kinetics of a process and get a sense of the timescale involved. If something is happening over a handful of minutes, then you probably are going to need some live-cell imaging. But if it's happening over hours, then you can take serial fixed endpoints, which often lends itself to better data and a higher quantity of data as well.

So the proportion of live work is probably only about 10%, whereas the rest of the work we do is fixed.

To learn more about high-content imaging of 3D models, watch the full webinar here>>

SelectScience runs 10+ webinars a month across various scientific topics. Discover more of our upcoming webinars>>
