SCIENTIFIC COLLECTIVE and ARTIFICIAL INTELLIGENCE
Roundtable on the AI Revolution in Science
The Roundtable discussion continues in the Forum.
To expand the cellcomm.org AI discussion and encourage participation of early-career scientists, Zinia Charlotte Dsouza and Cristiana Dondi, chairs of the SBP-Science Network (the society of postdocs and students), along with Guy Salvesen and Giovanni Paternostro, hosted a Roundtable on the AI Revolution in Science at Sanford Burnham Prebys in La Jolla on October 22, 2024. Invited speakers were Giorgio Quer (Scripps), Talmo Pereira (Salk), Sanjeev Ranade (SBP), Karen Mei (UCSD), Ani Deshpande (SBP), Will Wang (SBP) and Sanju Sinha (SBP). The Roundtable was well attended by scientists from multiple San Diego institutions, who filled the Fishman Auditorium.
Giovanni introduced the roundtable by mentioning the ongoing discussion about this topic on cellcomm.org. He reminded the audience of the interview and visit by Renato Dulbecco (Nobel, 1975) in 2009, who told us that he had noticed a marked change in biomedical science communication over his lifetime, from complete sharing of plans and ideas in the small molecular biology community of the 1950s to later secrecy and excessive competition for limited resources. Dulbecco pointed out that this was an inevitable response to changes in the world of science. The earlier openness was facilitated by mentors known and respected by the community. The changes in science we will see as a response to AI mean that communication habits are also likely to change and adapt to the new challenges.
Giovanni also shared a question that had emerged from previous discussions and surveys and that had been addressed by recent Interviews on the cellcomm.org website:
What could be achieved if there were a public or nonprofit AI effort with the same scale and level of funding as the current large private efforts? What would be the benefits for society?
This question can only be answered by a wide and open sharing and integration of ideas by the scientific community. Many comments on cellcomm.org have pointed out that, if the most advanced science were to be done only in the private sector, the lack of transparency would decrease trust in science, support by government and the public for academic research would decline, and society would not be able to fully benefit from the great opportunities provided by AI in science.
Guy pointed out the advantages of mentoring relationships for both mentors and mentees. He suggested that one-to-one meetings with mentors and collaborators can generate new ideas and motivate sharing them. He explained that two mentoring sessions, by himself and by Brendan Eckelman, a very successful biotech entrepreneur, were going to be offered in a raffle at the end of the roundtable. This was meant as an introduction to a strategy for the scientific community to self-motivate a wide debate about AI, by rewarding contributors with connections with mentors or collaborators. Many more potential mentors have expressed their willingness to participate.
The invited speakers started by making the following remarks:
Giorgio Quer:
I am Director of Artificial Intelligence and Assistant Professor at the Scripps Research Translational Institute. I will mention two applications of AI in medical research. The first one is the use of AI for the prediction of atrial fibrillation, a condition in which the heart beats in a very irregular way and which has many clinical implications. It is intermittent, so it is not always present, and this makes it hard to detect.
We started looking at features of a single-lead ECG, an electrical signal from the heart, to see if AI can decide whether a person is at high risk of atrial fibrillation even if the condition is not present during the recording. This would benefit the patient by suggesting more extensive monitoring and appropriate interventions. We were able to do this successfully using deep learning models developed with a large sample of 400,000 participants.
The second application, which we described in a recent article in The Lancet Digital Health and which is still at the development stage, is even more advanced. Many clinical data sources are used: Electronic Health Records, 12-lead ECGs, MRI, retinal fundus images and more. All these data can be integrated using transformer models, multimodal AI, and large language models to give a clinical prediction and potentially also to explain it to the patient.
Talmo Pereira:
I am a Fellow and Principal Investigator at the Salk Institute. I started my lab just under three years ago, and we are a fully computational group. Our core area of expertise is AI and deep learning. A lot of our work is applied to neuroscience.
The backbone of our work is a set of open-source software tools that leverage deep learning, in particular a set of techniques from computer vision, to do what is called markerless motion capture. We use deep learning to essentially extract the location and movements of any kind of body part, any kind of biological entity that moves, whether it has a skeleton or not, and we use these dynamics to extract meaning from videos. There are many applications to phenotyping and to behavioral studies. We are very committed to open-source software development and tool building. Our core software tool is called SLEAP. It has been used by 18,000 users around the world, from 66 countries, with over 100,000 downloads.
Our phenotyping work includes smarter home cages that do automated 24/7 longitudinal phenotyping, which we are using to characterize behavioral biomarkers of diseases like Alzheimer's and pediatric cancer. We also work with plant biologists to apply our software to characterize the root system architecture of plants, in the context of the Salk Harnessing Plants Initiative. Our goal is to help engineer plants that have enhanced carbon sequestration capabilities, and part of this is just quantifying the effects of different genetic variants, which we do using our computer vision techniques.
And finally, a new area that my lab has been entering of late is the emerging field of embodied neuro AI. Here the goal is to build very realistic digital twins of our animals, where we can leverage the motion kinematics of the behaviors of the animals. We can then create biomechanically realistic bodies of these animals in simulation.
Think of these as video games: we train artificial neural networks, structured and wired up like the brain, to maximize a score, where the high score in this video game is achieved by imitating the movements of real animals.
Once we have a digital brain, we can move a digital body, and we can probe its digital brain and simulate the kinds of experiments that we would do in the lab, to help generate, validate, and integrate hypotheses that would otherwise be intractable to do experimentally at scale.
I'll just use my bonus time to mention that we are very actively hiring, pretty much across all levels, RAs, post-docs, software engineers. Please contact me if our work interests you.
Sanjeev Ranade:
My name is Sanjeev Ranade. I am a new Investigator here at SBP. I started in January.
I did a postdoc at the Gladstone Institute with Deepak Srivastava. And I collaborated very closely with Professor Katie Pollard, who's an expert in applications of AI and machine learning to genomics.
I do not have the same level of AI expertise as some of my colleagues here. And actually, I think that for many of you guys in this room, that's a good thing. A lot of you are probably just doing basic biology, asking questions and wondering, what is AI supposed to do for me? How am I supposed to use this in my own lab?
And I think the answer is obviously to be inspired by all the folks presenting here. It's amazing to hear about some of the algorithms for unbiased imaging of animal behavior. I worked on mouse physiology. Everything we did was measured by hand, and it is well established that being in a room with animals and making these measurements affects their behavior. Now we can eliminate that aspect. And the fact that so much of this is open source is also amazing.
Most of you are probably basic reductionist biologists who have been doing western blots and other cell biology. Now it's all open source, and you can just go to GitHub and download all these Python scripts. But what happens if you don't know Python? The answer is that there are extraordinary tools out there to bridge this gap.
And I think that all of us who are doing wet lab biology realize that there is a massive tidal wave of AI coming our way. How can we deal with this tidal wave? The answer is for us to start learning dry lab skills, to start learning computational methods and techniques. And there are so many tutorials out there for doing that.
I'm going to finish by answering the question that was posed about what could be achieved with a massive public AI effort. Besides the Human Genome Project, we could think about the example of ENCODE, a consortium in which large labs coordinately contribute massive datasets that are openly shared.
A public AI effort would be amazing, but we need to define its scope. We do not need to do everything the private industry is doing. As we learned from ENCODE, we should solve problems of data uniformity, and we should have the resources in-house to be able to analyze those data. Think of public AI the same way you think of going to the UCSC genome browser. If we could get something defined and actionable and understandable for an individual lab that doesn't specialize in AI, it would be transformative.
Karen Mei:
Hi, everyone, my name is Karen. I'm a project scientist in the Yeo lab at UCSD. As many of you may know, our lab specializes in RNA biology. We have developed many different assays to investigate and understand many aspects of RNA dynamics.
We have recently embarked on using AI to better understand how RNA is regulated and how we can really leverage this understanding to better design therapeutics for medicine.
What I've been working on recently is building a large language model for RNA. You can think about RNA nucleotides as you would about words in a sentence.
And what we have been able to do is train a large language model to predict nucleotides that are masked: if you hide a nucleotide in the RNA, the model can guess it pretty accurately. This is an indication that the model has not only learned what an RNA really is, but actually understands the structure of RNA and can tell you what the RNA should be.
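The masked-nucleotide objective Karen describes can be sketched in miniature. The snippet below is purely a toy illustration: the sequences are invented, and a simple context-count table stands in for the real transformer; all function names are made up for this sketch.

```python
from collections import Counter, defaultdict

def train_masked_model(sequences, k=2):
    """Count which base appears between each pair of k-base flanks.
    (A transformer plays this role in the real model; counts stand in here.)"""
    context_counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(k, len(seq) - k):
            context = (seq[i - k:i], seq[i + 1:i + 1 + k])
            context_counts[context][seq[i]] += 1
    return context_counts

def predict_masked(model, seq, pos, k=2):
    """Guess the hidden base at `pos` from its flanking context."""
    context = (seq[pos - k:pos], seq[pos + 1:pos + 1 + k])
    counts = model.get(context)
    if not counts:
        return "A"  # uninformed fallback when the context was never seen
    return counts.most_common(1)[0][0]

# Toy training corpus of short RNA fragments.
corpus = ["UUGCAUU", "CCGCAGG", "AAGCAAA", "GGGCAUC"]
model = train_masked_model(corpus)

# Mask position 4 of a new sequence and ask the model to fill it in.
print(predict_masked(model, "UAGCAUU", 4))  # prints: A
```

Training with this masked objective is what lets the model absorb sequence regularities without any labels; the real model simply does this at transformer scale.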
Using such a model, our current goal is to predict RNA localization. We fed it many different types of RNA localization data, and we have now fine-tuned the model so that it can predict, with pretty high accuracy, where an RNA will go purely based on its sequence.
And after optimizing this predictive accuracy, we are then able to do an analysis called occlusion mapping, where we delete certain parts of the sequence and can then ask what part of the sequence is important for driving this localization.
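Occlusion mapping itself is simple to illustrate. Everything below is invented for illustration (the motif, the scores, and the function names); a real pipeline would query the fine-tuned RNA language model instead of the toy stand-in scorer used here.

```python
def localization_score(seq):
    """Stand-in predictor: a probability-like score that an RNA localizes,
    driven here by a single toy 5-nucleotide motif. (The real pipeline
    would call the fine-tuned language model instead.)"""
    return 0.9 if "GGACU" in seq else 0.1

def occlusion_map(seq, predict, window=5):
    """Delete each window of the sequence in turn and record how much the
    prediction drops; large drops flag regions that drive the prediction."""
    baseline = predict(seq)
    drops = []
    for start in range(len(seq) - window + 1):
        occluded = seq[:start] + seq[start + window:]
        drops.append((start, baseline - predict(occluded)))
    return drops

rna = "AAAAAAGGACUAAAAAA"  # toy motif at positions 6-10
importance = occlusion_map(rna, localization_score)

# Windows whose deletion clips the motif cause the score to collapse.
critical = [start for start, drop in importance if drop > 0.5]
print(critical)  # prints: [2, 3, 4, 5, 6, 7, 8, 9, 10]
```

The appeal of the approach is that it needs only the trained predictor: importance falls out of systematically perturbing the input, with no extra model training.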
Using this type of pipeline, we can really understand how the sequence drives different types of RNA dynamics, and we can also better understand what part of the sequence is important for a particular function. Many people assume that the determinants of RNA localization are just in the 3' UTR, because of both biological and computational limitations.
But using something like the large language model, which is specialized for studying long sequences, you can now look at much longer sequences, up to 35 kb.
In conclusion, we're using this same pipeline to study many different other parts of RNA dynamics, such as splicing, protein interaction, structure, expression, and translation.
And the ultimate goal is to master this knowledge so that we can design a specific RNA tailored to different functions, and we can design better therapeutics.
Ani Deshpande:
Like Sanjeev, I don't know why I am on this side of the room with the experts, rather than on that side of the room. I think he gave half of my talk. Sanjeev described this as a tidal wave. I would call it a tsunami.
What I tell people in my lab and others is that if you see a tsunami coming, depending on where you are, you may just get your feet wet, or you may be completely drowned. But not knowing that the tsunami is there and where it is coming from would be a mistake. That's why I have been paying attention to this, like many of you, for the last two or three years, trying to understand what is going on and how you can use it in your own scientific life.
What I can tell the trainees here is that AI will seem daunting and difficult, but there are ways in which it actually makes your job easier. The way I look at it is that there are three ways of directly implementing AI-based tools in your own work.
The first one is embellishment: using AI to look good. Think of sunglasses that you use just to look good. You can use ChatGPT, Copilot, and similar tools to write your emails and edit your text. Suddenly there are trainees writing emails with really great phraseology, and I'm like, wow, that's cool. You can also use these tools for debugging and annotating code.
The second way to use AI is enhancement. Some people use glasses not to look good but to see better; if you're wearing bifocals, you're using them to overcome a weakness. What do I mean by that? If you don't know how to code, like me, you can now very quickly start coding. I was really surprised how easy it is, using ChatGPT, to get it to provide code and to do data analysis.
The third way to use AI is not just to look good or see better, but to look beyond. When people made telescopes and microscopes, they didn't just improve vision; they allowed us to look at places never looked at before. Most people are going to use AI in the first two ways, to look good or to see better. But remember that the ability to look beyond is what is going to revolutionize research. For example, consider the Nobel Prizes that were just awarded, for technologies that can help design completely new proteins from scratch.
The last point I would like to make is that while this is incredibly exciting, it is also scary. When recombinant DNA and CRISPR and other such disruptive and potentially risky technologies came out, gatherings of experts were convened to discuss risks and propose guardrails. I think this is a discussion that needs to be had, and I'll end there.
Will Wang:
I am an Assistant Professor at SBP. I started here a couple of years ago. I'm in the Development, Aging and Regeneration Program, and my lab uses spatial omics approaches to study tissue structure in development, aging, and regeneration. And we sprinkle AI throughout that whole process.
I started out as a wet lab biologist, a stem cell biologist. I came to Stanford when this AI revolution was really just kicking off, and my postdoc mentor was part of a Nature publication on deep learning, classifying melanoma in a collaboration with Sebastian Thrun. That paper was cited, I think, over 2,000 times in two years.
So it dwarfed some of the other discoveries that were happening in the lab.
There are several applications of AI. One is acceleration: accelerating your biology, accelerating a classification problem, accelerating the whole process of quantifying data and understanding its structure.
There's also the aspect of creating. You heard what Ani said about ChatGPT, protein design, and all the generative AI now.
But it can also be used to discover things we have never seen before, to understand fundamental mechanisms of biology and physics that we have not been able to capture because of basic statistical limitations or because of the limitations of our brain. I can't memorize a sequence that is 37,000 base pairs long, and if I can't memorize it, I can't find patterns within it.
And really, these tools are here to revolutionize and model how things are regulated in a complex manner. We can interrogate these models now to understand how the physical world actually works and how the biological world actually works. The future will be to take these tools to model complex problems and really get an understanding of what's actually happening in our complex bodies, in our complex cells.
I was very lucky to work with Anshul Kundaje, who was leading ENCODE analysis at MIT, taking deep learning approaches to model chromatin accessibility. You can then interrogate these models to say: wow, these are the base pairs that matter. This regulatory element gives rise to this gene expression.
And you go from that, using complex data like images and genomic sequencing, to say: this is the hypothesis I need to test and design a therapy for. That is my view of what we can use AI for.
How we do that in the lab is really looking at tissue-level spatial regulation of cells, but also at how different elements in the genome can regulate gene expression. How do you control that, and how do you understand it better?
Sanju Sinha:
Like Will, I'm an Assistant Professor here. My background is in bioengineering and computer science. For my PhD, I was developing AI models for a lot of large datasets, like omics data, and asking really fundamental questions relevant to single-cell-based precision medicine and drug discovery.
What we stumbled upon, almost at the end of my PhD, is that as the biology becomes more complex, spatial data really end up helping quite a bit. We developed some AI models that can be very helpful with routine biomedical images, like pathology images, especially when they are built together with omics data.
Almost two years ago, we developed a method that we are still trying to fully understand; that tells you the complexity of the method.
In my own lab now, we are developing models that integrate routine biomedical images, like pathology images, with omics data, using novel methods that take all of them into account and can develop insights that other methods, such as those Giorgio introduced, cannot learn.
Regarding non-profits developing open-source models, I want to say first that if a non-profit develops really good models, it will become for-profit (if you know the OpenAI joke). But I believe that what matters even more than a non-profit approach driving these models is an open-source approach driving them.
If you are developing these models, care about others using them; it will come back and help you. Look at the Facebook model: they put out their code with the idea that it will change the economy so much that the money will come back to them.
After the initial remarks the following questions were asked:
Theophilos Tzaridis (postdoc):
Many thanks to the speakers. I am not an AI expert, but I think we had wonderful talks. One of the key questions about AI was mentioned by Sanjeev and Ani: how do we use it, and what are we to do? As trainees, we would appreciate any workshop we can get on that.
Talmo:
As a developer, and somebody who builds AI tools for biology, I often sit in these types of events, often with other folks like me, who will very readily go towards the argument that we need to improve quantitative literacy. We don't have very good scientific programming classes. You go into computer science; you learn how to sort a list. But that's not very useful for learning how to run a large genetic model.
To an extent, it's true that everybody should have access to this kind of education, and it should be to some extent accessible, maybe more accessible in biology. But ultimately, it is the role of the computer scientists, the computational biologists, the folks who are building these tools, to make them accessible, to make them useful for biology.
You wouldn't ask a molecular biologist to learn how to craft the lens in order to use a microscope. It is important that, as we put out technology into the world, we come up with a plan for making it useful and accessible. That is the second half of the job.
We ran the math, we ran the code, but next we need to also make sure that it's usable.
Usability means putting a GUI in front of it, a user interface, so that it is immediately accessible without barriers and immediately applicable to science. You make it easy to install. You maintain your dependencies. You attempt to answer emails from your users.
Nonetheless, one of the things that I feel very strongly about conveying is:
don't feel daunted. Don't feel that, just because you don't know Python or the mathematical foundations of deep learning, this stuff isn't for you, or that there is a tsunami coming that is overwhelming and impossible to access.
Demand more of your computational colleagues. Raise the standards for computer scientists. It's way too easy for us to publish, too easy to put out an algorithm, make a couple of cool videos, which we saw earlier, and call it a day. In order to do good science, we need to make the technology accessible to others.
It is also important that we fund that kind of work, because this level of translation of the basic technology development is what makes it usable for biology. And we should also, of course, be welcoming. As people begin to use the tools, they'll become more curious about how they work and figure out ways to improve them. But the first step to doing that is lowering the barriers to entry.
Giorgio:
I fully agree that these tools need to be made differently and be more accessible. I personally think there is a little bit more to it than just developing better tools and becoming a little bit better at using them.
I think a dimension that we should really explore in the future is deeper collaboration; it is difficult to create a tool that can be used in general. We need to develop the tool together, with the scientists who have the subject matter expertise working really closely with the computer scientists.
If there is one thing I would really wish for the future, it is that it would be easier to get funding that facilitates collaboration, so that we can work together in developing the tool that is really needed for the specific question the subject matter expert has.
Ani:
Nisha Cavanaugh asked me to do a ChatGPT tutorial for biologists here. I didn't know whether I should do it, because if any computational person was sitting in the audience, that would be embarrassing for me, but thankfully none of the people who came were computational biologists, so I could say whatever I wanted.
Most of the people who came had their jaws drop after hearing what you can do just using basic tools: turning papers into podcasts, summarizing a lot of data very quickly, making figures, and so on. You tell it this is the figure I want, and it will take care of how to make it. These kinds of things are very easy to do now, they are not daunting, and you should do more of them.
The other thing is not to think of AI as just an autocorrect on steroids. People are using it like Grammarly, which is good, but there are many more things you can do, and it is very interesting to learn them.
Sanju:
Just to build on that, I'd like to really encourage you, all of you, if anyone of the fellows wants to learn coding, to use these tools. This is the best time to do it.
There has never been a better time to do it.
Does anyone know what the hottest coding language is right now?
English.
Really.
These language models have overcome a particular gap that a lot of you were just thinking about but were not precisely articulating.
The idea is: why are we talking? If we talk to another human, we can talk about logic, but in coding we could not do that, because of the syntax issue; now you can overcome that.
On that note, I will tell you that we are structuring a course at SBP called AI Assisted Modeling and Programming, which will be offered in the next quarter or the quarter after that. We are just trying to finalize it. Lukas and I will teach it. We will start with an experimental version with a few individuals, ten or so, and we will build on that.
Will:
I came from that boat. I was a wet lab biologist. I learned some basic programming when I was younger, but then we started by applying what other people had created. And it didn't work on my tissue. It didn't work on my data. It was not something that was generalizable.
This is the thing: you have to understand that some of these models cater to specific input data. They cater to specific tissues and cell types. And it's going to take some work to get it working for your tissue or your cell type or your data set.
One of the things that happens, very much related to how we translate biology, is that it's expected that you publish some of these things. You publish the algorithm, and then the next step is to commercialize. There's a lot of effort that needs to be put into obtaining funding. There are a lot of expectations, dealing with the users. And there's no funding for that in the academic world. What happens is that you really need to take this to a company. But when you do that, the focus of those applications is going to shift.
Are you going to make money segmenting cells? Probably not, but some think you might. My friend has a company. When I talked to some of the authors of the melanoma diagnosis paper, they said they had tried to start a company, and they had funders; the investors were interested.
They asked: how do you make money from this? Well, anybody could take a picture and put it through the algorithm, and it will tell you if the mole is cancerous or not. But then people still have to go see a dermatologist, because the dermatologist, at the end of the day, still has to look at that mole. And the company and the investors were not willing to take on the risk of somebody having a cancer and not going to the dermatologist. For academic research purposes it works, but when you try to commercialize it, that's a gap.
For some of these things we are doing great for research applications, but when we try to translate computational tools there is a gap, the same valley of death that happens when we try to translate medicine.
Bastien Cimarosti (postdoc):
I have a question about the use of AI in single-cell genomics. Recently we have seen the first foundation models for analyzing single-cell data, models trained on a million, or even 50 million, human single cells.
Ideally these models would understand some kind of fundamental knowledge about the cell. At the moment, however, we can use these models for prediction, but we still cannot really learn basic knowledge from them. I was wondering if you think we could achieve this kind of machine-originated knowledge.
Sanjeev:
My pessimistic, sort of wet lab-driven view on this is actually cautious. What was said earlier about having groups where the wet lab and dry lab folks and engineers are all sitting together working together, that's going to be absolutely key. My view is that if you want AI to give you a hypothesis, to form a hypothesis for you and then answer it, it's not going to.
At the moment many of these models are still at version 0.1. Will it be amazing in another five years? Absolutely, yeah, in terms of prediction. If you look at the human cell atlas, it can help you to establish what your cluster of cells is in your data, but even that is not yet working so well.
It is going to be a case of iteration between the actual experts in biology and the computational experts. We are not there yet. But maybe we should not even expect AI to do all of our work for us. Sanju, what is your point of view about this?
Sanju:
I'll try to make it brief. We are at a very early stage in foundation models for single cells. Their performance on some tasks is very limited right now. So maybe we're not yet ready to really make a judgment.
Leslie Boyd (Core manager):
I run the cell imaging core here at SBP. I have a few questions. We have talked about how we get the scientists to use AI in a kind of grassroots explosion. I like the open-source model, but we all know that nobody likes Linux, because they have to learn it. And so, what happened? Microsoft came in and Apple came in. The open-source became commercialized.
How do we then allow the normal person to learn AI in an easy way? Can we push this down to the middle school, elementary, high school kids, not just have PhD people who are going to move this forward?
The second part of this is: what will be our role? AI cannot yet handle context so well, even though it can translate everything into everything else. It can write code for us, but what do we do when AI is good enough to replace us?
Talmo:
For the first part, I feel very strongly that folks who develop AI should also be responsible for making it accessible. It doesn't have to be necessarily everyone. I do think it should be folks who are embedded with subject matter experts.
But to your precise point and to some of the concerns related to funding and the open-source model and so forth, I think it is possible. Our lab is proof that this is possible.
We have millions of dollars of NIH funding specifically to make our AI technology more accessible.
And to that point, a couple of months ago we went to a local middle school, and we spent the whole day with sixth graders. We had to talk to a hundred different sixth graders, all in the span of about 25 minutes. They learned how to get on their Chromebook.
We put in all the engineering work so that our AI software, with its nice accessible front end, would show up on their school-assigned Chromebooks. They could click and train the AI. We popped images up in front of them, they clicked on the ground truth locations, and they trained the AI, because we made it possible for them to leverage cloud-based resources. By the end of the class, they were all competing to see who could make their AI the smartest. It is possible.
I think the NIH is very aware of the potential and the importance of making these kinds of tools accessible. And increasingly, there is more funding available for doing this kind of, essentially, engineering work. What we should not lose track of is that it is important to always be aligned to the biological goals. We don't want just to do engineering for engineering's sake. We want to make these things accessible so that we're achieving new biology.
And to the point of: what are we going to do when it gets so advanced that it replaces us? I just think it never will.
Going back to the previous question about the single-cell foundation models, I would like to mention scGPT, which is quite remarkable. It is a 2024 Nature Methods paper, and I think it is one of those that really shaped the field. They pre-train a foundation model on millions of cells and then fine-tune it on a smaller set of perturbation data so that it can do what they call in-silico reverse perturbation. It is quite exemplary of the importance of having an experiment-to-AI technology development loop.
It will make predictions grounded on data. It will generate a hypothesis that you can test, go get that data, then use that data to fine-tune the model. Now, it is becoming more about the science and data generation and experimentation than it is about the engineering.
What's unique and new about these models is that not only can we use engineering to make them accessible, but by design they enable direct collaboration between the experimentalists, the practitioners and the scientists, and the models themselves.
Erkki Ruoslahti (former SBP President):
There are some diseases where the patient population is so heterogeneous that people don't even want to develop drugs because it would be too expensive to test them in the clinic. It seems to me that AI could help to solve this problem. Is there a lot of work going on in this area?
Karen:
I think that's an excellent question. The hope is that in the long run, once our AI models have been trained on so much data that is in the public space, they will be able to make predictions on the more heterogeneous, rarer diseases, and that will hopefully help us design experiments and tailor better therapeutics.
As far as how to translate this clinically, I definitely still see a gap, because even if we discover that a particular therapeutic might work for a rare disease, there has to be funding to enable clinical trials for these rarer diseases. But the good thing is that the AI model might be able to generate more specific hypotheses that we can test.
Erkki:
Are there any people working on that, that you know?
Karen:
I have seen start-up companies where they have platforms that they market as being able to select drug candidates and then do high-throughput screening for rare diseases. The idea is that these rare diseases might have some kind of convergent mechanism so that even if the patient population is very small, they all have some common mechanisms, and the population might effectively become larger.
At the end of the Roundtable, it was decided that the discussion was going to be continued in the Forum on cellcomm.org.
The raffle for the mentoring sessions with Guy Salvesen and with Brendan Eckelman was conducted by Zinia and Cristiana and was won by two delighted early career scientists: Theresa Slaiwa and Joseph Rhodenhiser.