Webinar Highlights: Obtaining Clear LC-MS Data for Complex Proteins – Your Questions Answered
Efficient analysis of intact proteins in a multi-user/multi-project walk-up environment
12 Feb 2018
LC/MS is widely used in the biopharmaceutical industry to confirm the molecular weights of therapeutic proteins. It is fast, accurate, and relatively quantitative. In a recent SelectScience® webinar, Thomas J McLellan, Senior Scientist at Pfizer, highlights the establishment of a high-resolution, high-mass-accuracy walk-up solution that provides intact mass analysis for protein biochemists. He also outlines the hardware and software solutions used, and how they were implemented to manage a multi-user environment.
Read on for highlights from the Q&A session. If you missed the webinar, you can watch it on demand here.
Discover the range of LC-MS and separation solutions available from Agilent Technologies >>
Q: Have you tried setting up alerts (e.g. email or text message) from Walkup software when there is a system error such as running out of solvent or the wrong injection device (plate vs. vial) selected?
A: Yes, we have set up alerts on our system. We currently have it configured to send an email if there’s an error on the system or if it’s taken out of administration mode. You can also set up the system to send emails when the solvent level becomes low, but we don’t routinely use this function: part of our daily checks involves looking at solvent levels, so we always know where we stand on that front. Occasionally we will set up low-solvent alerts, but where possible we avoid it so we don’t receive a stream of unnecessary email notifications.
Q: You mentioned in the presentation that there are a total of eight different column chemistries. Is there one for glycoprotein analysis? I’m curious because I recently purified proteins from mammalian cells, and they tend to have heterogeneous glycosylation profiles. I’m wondering whether a glycoprotein-specific method could give me better sensitivity and separation.
A: Unfortunately, I think the column chemistry you use is really specific to the protein itself. A method that gives a really good separation for one particular intact glycoprotein wouldn’t necessarily work for other glycoproteins. While we have access to a number of column chemistries that we can experiment with, I don’t think there is a universal method for this. We would potentially be interested in trying something like this for glycopeptides, where it would be much more useful, and we have the capabilities for that. The four column slots on our instrument aren’t set in stone, so I can look at different combinations to give my end users as many choices as possible for their analysis.
Q: Do you ever have issues with sample quality, incorrect concentration, solubility problems, etc., given that it’s an open-access login? Is there more risk of instrument errors from poor samples on a UHPLC system than on an HPLC system?
A: I actually think there are fewer issues on a UHPLC system than on HPLC systems, mainly because UHPLC gives better separations and peak resolution. The membrane proteins we analyze are normally extremely complex: they contain lots of different detergents, and there are lots of different proteins in the sample. Because of the resolution you achieve with UHPLC, you’re able to get more information out of it. If you have co-eluting proteins, they don’t always elute directly on top of each other; sometimes there’s a small shift, so you can select a narrower window and obtain data you wouldn’t normally get on an HPLC, where the chromatographic resolution isn’t there. That said, some of my end users go to use the system without a good grasp of what their sample is like. Sometimes they don’t get data because the sample is too dilute, or because they’re unable to dilute it. We like our users to concentrate their samples to over 1 mg/ml and then dilute them down to 0.1 mg/ml in water for analysis; this generally gives good results. The data is only as good as the sample a lot of the time. There are also times when, in concentrating their sample, users concentrate the detergents into it as well, which interferes with the data. UHPLC gives a better chance of separating protein from detergents, but that is usually protein dependent.
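The dilution guidance above (concentrate to over 1 mg/ml, then dilute to 0.1 mg/ml in water) is ordinary C1·V1 = C2·V2 arithmetic. A minimal sketch, assuming a hypothetical helper function (the name, units, and volumes are illustrative, not part of the walk-up workflow):

```python
def dilution_volumes(stock_conc, target_conc, final_volume):
    """Volumes for a simple dilution using C1*V1 = C2*V2.

    stock_conc and target_conc in mg/ml; final_volume in uL.
    Returns (stock_volume, diluent_volume) in uL.
    """
    if target_conc > stock_conc:
        raise ValueError("cannot dilute to a higher concentration")
    stock_volume = target_conc * final_volume / stock_conc
    return stock_volume, final_volume - stock_volume

# Diluting a 1 mg/ml concentrate to the 0.1 mg/ml working
# concentration in 100 uL total -- the 1:10 dilution mentioned above.
stock_uL, water_uL = dilution_volumes(1.0, 0.1, 100.0)
print(stock_uL, water_uL)  # 10.0 90.0
```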
Q: The membrane protein spectra you were able to collect, and your success rates are impressive. Was this done on the PLRP-S column? Would you be willing to share what solvent system you were using?
A: We actually used the generic method we run with the PLRP-S column: just 0.1% formic acid in acetonitrile and water. Pretty straightforward stuff. What we really need to know is how the sample was prepared, amongst various other things. If you’ve followed the rules, you will never have a membrane protein fail to provide a spectrum. Whether that spectrum is interpretable is a completely different matter.
Q: How often are your non-MS users that access this system coming to you for help interpreting data?
A: One of the benefits of having a network-based system like ours is that a user can’t access the instruments unless they’ve had the necessary training beforehand. We usually do our training in two sessions. In the first session, in the morning, all of the attendees bring in a protein of their choice and we go through the process of loading and submitting a sample. In the second session, in the afternoon, once all of these protein samples have finished running, we sit down and go through interpreting and processing the data generated. This training has limited a lot of the cases where people would come back to us with questions. With that in mind, people do come back to us if there’s been a large gap since they last did the training, or if they haven’t run a sample in a number of weeks and need a quick refresher. Requiring end users to go through training eliminates a lot of the questions I experienced in the past on older systems.
Q: Have you implemented standard procedures for sample preparation to guide your users toward appropriate sample conditions?
A: We don’t have a standard operating procedure in place for intact mass analysis, and we generally don’t restrict users from doing anything. We do ask that if they run a particularly dirty sample, or a membrane protein with lots of detergent, they run a number of blanks after their initial run to clean out the system and get it ready for the next user. Apart from that, the only requirement is that users run a sample at approximately 0.1 mg/ml. For membrane proteins we ask for at least a 1:10 dilution of the sample, in order to break up the detergent micelles and enable good chromatography of the protein. For peptide mapping we do have an SOP for reduction and digestion of samples, and for membrane proteins we also suggest that users avoid large detergents such as Triton or PEG, as these ionize much better than the protein of interest, so all you will see in the spectrum is detergent.
Q: How do you ensure the salts and buffers don’t have a negative effect on the MS ionization? Is there an online desalting step in the methods?
A: We have a delay built into all our methods. A switching valve flushes the void volume to waste so the source never sees it, and then we do the analytical run. We hold our gradient for about a minute before any chromatography so the whole injection volume can get through the system as quickly as possible. You will always see some salts associated with your protein, and detergents are also a problem. Even for a number of complex samples at less-than-ideal concentrations, I’ve not had to run any separate desalting steps on this system. It is so robust that in most cases all we have to do is run blanks after samples to get it back to normal. The most we do are the routine procedures recommended by Agilent during normal operation.
Q: Can users select multiple automatic data processing methods for one sample?
A: You can’t select more than one data processing method per sample. Each of our automatic data processing methods is associated with a specific analytical method. Using the BioConfirm software, however, users can create their own processing methods for their samples. If there’s something specific they need to do that the system wouldn’t normally allow, I can create a new method for them as an administrator and tailor their experience by associating different methods. I would then add the new method to that individual’s list so only they have access to it. You still can’t link multiple processing methods together, unfortunately.
Q: Do you need to perform extra system cleaning to accommodate membrane protein analysis? Membrane proteins are usually very hydrophobic and can lead to severe carry-over problems; would you recommend that users run extra blanks after their membrane protein samples?
A: Running extra blanks is standard practice for us. A user can run any sample up to three times, and this goes for blanks as well. If I ran a membrane protein on the system, I would follow it up with two or three blanks. I highly recommend doing this after membrane proteins because of the detergents and other proteins that are often part of those samples. I’d even suggest running blanks after any sample if you’re not confident you know what you’re running.
Q: What is the sample complexity the system can tolerate? For example, is there a maximal number of proteins that can be analyzed, or how heterogeneous can the protein PTM profile be in order to get meaningful results?
A: Heterogeneous post-translational modifications actually behave quite similarly to membrane proteins: both are very complex and often have variable glycosylation patterns. With complex samples like this you can often go through the chromatogram and find 10 to 14 different proteins, because proteins elute across the entire analytical space of the gradient. If we run a short gradient, most of the sample will co-elute, giving a spectrum that’s hard to interpret. In the case of one of the large proteins shown earlier in the webinar, we had severe degradation of that protein: I identified almost 20 different degradation products. Not only did I see the intact protein, but there was a whole slew of related fragments. In that case the protein was extremely clean, so I got good results; in that regard it’s very sample dependent. One thing that was very enabling for some of our samples was using the GnTI cell line, which gives very consistent glycosylation patterns, letting you decipher the spectrum because it’s not nearly as complex. That said, there’s a point where you simply have too many possible PTM combinations, making the spectra impossible to interpret; you’re limited by how finely you want to go through the data. Automatic data processing software has more difficulty with very complex spectra, but most of my users can identify their protein manually in data analysis. In the run with 20 or so proteins I was able to identify them, so I feel the system can handle quite a bit of complexity.
Watch the full webinar on demand, or discover more of our upcoming webinars.