nanoHUB-U Principles of Nanobiosensors/Lecture 2.9: Settling Time, First Passage, and Narrow Escape Time
========================================
[Slide 1] Welcome back from the small break that we just took. Before the break, we had been talking about how the physics we learned for biosensors, the diffusion limit, how biomolecules arrive on the sensor surface, can be flipped on its head: the same diffusion equivalent capacitance can be used to solve important and interesting problems in biophysics itself. I told you about the mean first passage time: if you have a protein inside a cell with an escape door, the protein diffuses around within the cell, gets bumped around on the various cell walls, and eventually, after a while, it finds the escape hatch and escapes forever. At that point I mentioned there is a slight difference between this mean first passage time and the narrow escape time. I want to tell you a little bit about the narrow escape time, and then we will wrap up this set of lectures with a summary.
[Slide 2] So let's talk about the narrow escape time. You will often see it abbreviated as NET, the narrow escape time, and that is different from the mean first passage time, or MFPT.
[Slide 3] And the essential distinction is this. In both cases there is a door of escape, but in the first case the door sits in a thin membrane, so flux coming from any direction can escape: if a particle arrives at an angle it will escape, and if it arrives vertically it will certainly escape. When the cell wall is a little thicker, with a more vertical structure to the opening, a particle coming from the side at a glancing angle will not be able to get through; more or less the only way to escape is to arrive vertically. So even without doing any math, we can immediately conclude that in the first case the escape time will be a little shorter, because the particle can get out in many different ways, and in the second case it will be a little longer. How much longer? We will find out. Now, the key idea in both cases is that we have to calculate the diffusion equivalent capacitance. For the first structure, that capacitance involves a small sphere, which is the escape hatch, and a bigger sphere, which contains the biomolecule. For the second structure, we again have a diffusion equivalent capacitance, but the hatch is now a disc, not a sphere, because flux can only be captured when it arrives perpendicular to the disc. Correspondingly, the diffusion equivalent capacitance is between the disc, which you can think of as a completely squashed spheroid, and a larger confocal spheroid containing the protein molecule. We can calculate both of them. You see, the idea is that all of these capacitances are tabulated in handbooks; if you Google it, you will find that the capacitance of two spheres is well known.
We have done it in our college years, and the capacitance between a disc and a confocal spheroid is also well known. Therefore, we can simply read it off and see what our result is, without having to run a Monte Carlo simulation of particles diffusing around until they find the gate, and without working the computer too much.
[Slide 4] How would you do this? The calculation is actually simple. The way one would do it is to map the problem onto this spheroidal coordinate system. In this coordinate system, a surface of constant mu is an ellipse, x^2/a^2 + y^2/b^2 = 1, with a = c cosh(mu) and b = c sinh(mu), where mu plays the role of an eccentricity parameter; this is just by definition. Since you already know the size of the disc and the distance of the release point, all you have to do is construct the confocal spheroid that has the same dimensions, and you can easily do that. Then the capacitance between the two confocal surfaces can be read off; it is given in terms of mu and the diffusion coefficient D, divided by pi. How would you know what mu is? Well, remember that a = c cosh(mu) and b = c sinh(mu), and we know both a and b; therefore we can calculate mu, and if we put it in, then this is the capacitance associated with this deformed structure. That is C_D,SS, and it is inversely proportional to the narrow escape time. Why the narrow escape time? Because the formula looks almost the same as for the mean first passage time; the only change is that I have considered the hatch as a disc rather than a sphere, and that is what allowed me to calculate this. Now, of course, you can approximate this capacitance: work out the cosh formula, get an expression for mu, and you will find a very simple relationship involving a.
Here a is the radius of the disc, r is the distance from which the molecule was released, and D is the diffusion coefficient, and everything can be calculated. Now, if you compare this with the two concentric spheres, or two concentric cylinders, that we discussed before, the results are identical to within a factor of two. So you can compare the times, and the comparison says that the narrow escape time takes roughly twice as long as the mean first passage time, because the particle cannot escape through the hole as easily. So in this particular configuration it takes about twice as long, and you can calculate these relationships by the simple method we have just discussed and get essentially the identical result.
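As a rough check on that factor-of-two claim, here is a minimal Python sketch. It assumes the standard leading-order narrow-escape result that the escape time scales as the domain volume divided by the diffusion-equivalent capacitance of the hatch, and it uses C = 2*pi*D*a for a hemispherical hatch (the MFPT picture) versus C = 4*D*a for a flat disc in the wall (the NET picture); all the numerical values below are made up for illustration.

```python
import math

# Illustrative (assumed) numbers for a protein escaping a cell
D = 1.0e-10    # diffusion coefficient, m^2/s
a = 5.0e-9     # radius of the escape hatch, m
R = 1.0e-6     # radius of the cell, m
V = 4.0 / 3.0 * math.pi * R**3   # cell volume

# Escape time ~ (domain volume) / (diffusion-equivalent capacitance of hatch)
C_hatch_hemi = 2.0 * math.pi * D * a   # MFPT picture: hemispherical absorber
C_hatch_disc = 4.0 * D * a             # NET picture: flat disc in the wall

tau_mfpt = V / C_hatch_hemi
tau_net  = V / C_hatch_disc
print(tau_net / tau_mfpt)   # ratio is pi/2 ~ 1.6: NET is "within a factor of two" longer
```

The ratio comes out to pi/2, independent of the assumed sizes, which is consistent with the lecture's "takes roughly twice as long" statement.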
[Slide 5] And if you plot the narrow escape time as a function of r and a, you get a corresponding curve: as you release the molecule farther and farther away from the hatch, the narrow escape time does not increase linearly; it increases sublinearly.
[Slide 6] Alright. So I just wanted to give you a flavor. This narrow escape time is an important quantity in biology, and I have shown you how to calculate it by using the diffusion equivalent capacitance. I explained, purely on physical grounds, why the narrow escape time is always slightly larger than the mean first passage time, but if you know the corresponding equivalent capacitances for the two structures, you can always compare them for various configurations. For example, it could be a disc sitting in a spheroidal region, or a flat plane sitting in a cylindrical region. Different configurations would allow you to calculate these numbers precisely. Now, I want to emphasize that all three times we saw are statistical averages. We are not talking about when one protein gets out; we are asking, if thousands of proteins start from the same point, when on average do they get out? The narrow escape time is defined the same way. And, similarly, for the settling time we are asking when the first n_s molecules land on the sensor surface. With this short discussion on narrow escape time,
[Slide 7] let me summarize the lectures that we have covered since Lecture 5, that is, Lectures 5 through 12. What are the things that you should remember from this set of lectures?
[Slide 8] The first point I wish to emphasize is the settling time. The time it takes for the molecules to arrive on the sensor surface is something that can be calculated relatively easily, and it sets a fundamental lower limit on detection. Here rho_0 is the analyte density, and the settling time is governed by the shape of the sensor. Not the surface-to-volume ratio, not electrostatics, but simply how a molecule randomly walking around in the environment eventually finds its target, the sensor. That relationship is fundamental and technology-agnostic. It does not matter whether you are thinking about amperometric sensors, potentiometric sensors, cantilever-based sensors, or any other sensor that you are individually working on; this result is true in general. In that sense, it has a similarity with the Heisenberg principle: the longer you are willing to wait, the more precisely you can measure a quantity. Here, the longer you are willing to wait, the lower the density you can measure.
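The tradeoff just described (wait longer, detect lower density) can be sketched in a few lines, assuming the lecture's relation n_s = C_D * rho_0 * t_s with a steady-state spherical sensor, C_D = 4*pi*D*a_0. All numbers here (D, a_0, n_s, the densities) are illustrative assumptions, not values from the lecture.

```python
import math

def settling_time(n_s, rho0, C_D):
    """Lower-limit settling time from n_s = C_D * rho0 * t_s."""
    return n_s / (C_D * rho0)

# Illustrative (assumed) numbers: a 1-um spherical sensor in steady state
D   = 1.0e-10                  # diffusion coefficient, m^2/s
a0  = 1.0e-6                   # sensor radius, m
C_D = 4.0 * math.pi * D * a0   # diffusion-equivalent capacitance of a sphere
n_s = 10.0                     # molecules needed for a detectable signal (assumed)

for rho0 in (1e18, 1e15, 1e12):   # analyte density, molecules per m^3
    print(rho0, settling_time(n_s, rho0, C_D), "s")
```

Each factor of 1,000 drop in density costs a factor of 1,000 in waiting time, which is the Heisenberg-like tradeoff in its simplest form.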
[Slide 9] Now, that allowed us to organize the broad variety of nanobiosensors that have been reported in the literature. You see, depending on the history of a particular laboratory, people generally work on amperometric sensors if that laboratory has expertise in amperometric sensors; similarly for potentiometric sensors, where there is local expertise, people try to marry nanotechnology with potentiometric sensors. Everybody comes with their own background and then brings nanotechnology ideas to nanobiosensing for that technology. But regardless of which specific technology they have brought, it turns out that nanoscale sensors give extraordinary sensitivity.
[Slide 10] And we explained that by using the basic concept of a diffusion and capture problem. We allowed the biomolecules to randomly walk around, and once they came to the sensor surface, of arbitrary geometry, they were captured by the receptors, because the receptors are like glue and the molecules are like flies that get stuck to it. That was the basic argument: this is the capture process, and this is where the flies are moving around. Now, the diffusion capture problem, as I explained in Lecture 7 or so, is a very complicated problem. You could solve it on a computer, and of course computers are getting more powerful all the time, but the result gives you beautiful pictures without much deep insight. It does not tell you how the result changes if you alter your sensor slightly this way or that way; you have to do another simulation. So what we tried to emphasize is that a simpler, analytical analysis that gives insight is probably more useful in some ways.
[Slide 11] The approach we took is a general approach, and from now on you should be able to use it to solve any problem related to diffusion and capture. It has four steps. The first step is that the total number of particles captured is proportional to the time you allow for capture: rho_0 is the analyte density you begin with, and C is the diffusion capacitance, and that is the essential element. Because, you see, you do not really have to calculate C_0; you can be lazy, or at least I can, and just look up the value of C_0 from the electrostatic analog. For any sensor, I can quickly construct the electrostatic equivalent and simply look up the corresponding value of C_0 in a handbook. For example, remember the array sensors? The original problem was solved by conformal mapping, which is actually complicated, but I do not have to do that anymore; I can simply copy the formula. That is the second step: everywhere there is an epsilon in the electrostatic problem, I replace it with the diffusion coefficient D, and everywhere there is a w, the separation between electrodes, I replace it with the diffusion front sqrt(2nDt), where n is the dimensionality of the diffusion. It does not matter very much what the exact value of n is, and the prefactors are not that important; the important thing is that the front is proportional to sqrt(Dt). The third step is that once you get the value of C_0 and put it in, you calculate the time taken to capture a certain number of molecules, and that time is t_s, the settling time. The fourth step is that once you have done that, you plot the log of the settling time, because it will span orders of magnitude, from, let's say, milliseconds to months.
So log would be good: plot the log of the settling time against the log of the analyte density, and once you plot it, you will be able to see whether this particular sensor technology is better than the others or not. For example, what we had for a spherical sensor: very quickly, we looked it up, C = 4 pi D / (1/r1 - 1/r2). That's the formula: you look it up, replace epsilon with D, and replace w with the diffusion front sqrt(2nDt), and then do the analysis as before. The general approach should work in all situations. Most of these are approximate solutions, but they should work in a wide variety of problems, as you will see in the homework. If you have any questions regarding this, of course, I will be happy to explain further.
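The four-step recipe for the spherical sensor can be sketched in a few lines of Python. This is a sketch under assumptions: the two-sphere electrostatic formula with epsilon replaced by D, the outer sphere placed at the diffusion front sqrt(2nDt), and made-up values for D, r1, rho_0, and n_s.

```python
import math

D, r1 = 1.0e-10, 1.0e-7      # diffusion coefficient (m^2/s), sensor radius (m); assumed
rho0  = 6.0e17               # analyte density, molecules/m^3 (~1 nM); assumed

def C_sphere(t, n=3):
    """Step 2: two-sphere capacitance 4*pi*eps/(1/r1 - 1/r2) with eps -> D
    and the outer sphere at the diffusion front, r2 = r1 + sqrt(2*n*D*t)."""
    r2 = r1 + math.sqrt(2.0 * n * D * t)
    return 4.0 * math.pi * D / (1.0 / r1 - 1.0 / r2)

def captured(t):
    """Step 1: N(t) ~ C_D(t) * rho0 * t."""
    return C_sphere(t) * rho0 * t

def settling_time(n_s, t_lo=1e-9, t_hi=1e9):
    """Step 3: bisect (on log t) for the time at which n_s molecules have arrived."""
    for _ in range(200):
        t_mid = math.sqrt(t_lo * t_hi)
        if captured(t_mid) < n_s:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return t_hi

# Step 4 would be plotting log(t_s) against log(rho0); here we print a single point.
print(settling_time(10.0))
```

Sweeping rho0 over several decades and plotting log(t_s) versus log(rho0) reproduces the kind of settling-time chart the lecture describes.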
[Slide 12] And once you have done that, if the sensor is described by a fractal dimension (not all sensors can be described by a single fractal dimension), then we were able to find a very simple relationship that applies to a broad variety of sensors and gives us an organizing principle, a principle through which we could organize the sensors.
[Slide 13] But before I get to that point about organization, I want to emphasize a very important point about the settling time and rho_0. You see, I drew a line, this blue line, after the problem was solved by the diffusion equivalent capacitance, let's say for a nanowire sensor. It is very important to understand that this is an ensemble average. Think about an infinitely long sensor, and chop it up into, let's say, ten-micron pieces, so that you have, in effect, 1,000 sensors. Now suppose you looked at the time at which every sensor responds: you get a little blip every time a molecule is captured by a sensor. If you plot these times, there will be a distribution, let's say a Gaussian distribution. It is important to realize that there will be sensors that respond early. If the sensor is like a fire alarm activated by carbon monoxide in the environment, you do not care about density; all you want to know is that there is carbon monoxide. In that case, the first response time is good enough: the alarm should ring. However, if you are interested in density, if you want to know what density you are measuring, then you have to measure the entire distribution, or at least 95 percent of it, because only then will you be able to calculate the average response. For an unknown solution, if you want to calculate the density, the whole distribution is necessary, and that is what makes life difficult for measurement at very low density. This is an important distinction, because this picture should be viewed as a composite of those 1,000 sensors.
So, individually, particles are moving around in every sensor, but when you put all these frames together and view them together, you will see that the diffusion front emerges only as an ensemble, not in an individual sensor's response, and this is a very important distinction. Now, it is important to remember that many times, experimentally, detection is reported at the first response, because you know the initial analyte density, and as soon as you get a response you say that you can detect such and such analyte density. You have to be very careful here, because even if the analyte density were somewhat lower, you could still get a response within that time. The real difficulty is the inverse problem, in which an unknown density is to be determined. In that case, you have to wait until the full distribution has come in; only then do you have a reproducible answer that you can report with confidence.
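The difference between the first-alarm time and the ensemble average can be seen in a toy Monte Carlo. This is a sketch under crude assumptions: each "sensor piece" is modeled as a 1-D random walk on a lattice of 20 sites with a reflecting far wall and an absorbing sensor at the origin, and every parameter (lattice size, release point, number of sensors) is made up for illustration.

```python
import random
import statistics

random.seed(1)

def blip_time(L=20, x0=10, max_steps=20000):
    """One 'sensor piece': a molecule random-walks on 0..L (reflecting wall at L);
    the sensor at x = 0 captures it. Returns the step count at which the blip occurs."""
    x = x0
    for step in range(1, max_steps + 1):
        x += random.choice((-1, 1))
        if x <= 0:
            return step          # captured: this sensor 'blips' now
        if x > L:
            x = L - 1            # reflect off the far wall
    return None                  # not captured within the step budget

# 1,000 nominally identical sensor pieces, each giving one blip time
times = [t for t in (blip_time() for _ in range(1000)) if t is not None]
print("first alarm :", min(times))              # fire-alarm criterion
print("ensemble avg:", statistics.mean(times))  # what density estimation needs
```

The earliest blip arrives far sooner than the ensemble average, which is exactly why a "first response" report overstates how quickly an unknown density can be measured.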
[Slide 14] So this way of viewing things allowed us to organize the sensors that we have seen, and that has been very helpful in understanding why one type of sensor is far more sensitive than another.
[Slide 15] Then I had these four strategies for beating the diffusion limit. Diffusion is fundamental to the sensing problem, but we can beat it. The idea is always to reduce the diffusion time, either by reducing L or by increasing D, and we saw that if you throw a lot of sensors at the analyte particle, you effectively box it in and thereby reduce L. This is the same reason why, when we take a Tylenol (remember the original problem, with its 1.3 millimolar concentration), or, better, let's say an antibacterial drug, the drug is looking for that rare bacterium. At high drug density, it quickly catches the bacterium, no matter which corner it is hiding in. If you instead waited for the bacterium to come to the drug, that would take a long time. So simply throwing the drug at the target, at relatively high density, allows you to find it relatively quickly, so that the action of the drug can proceed faster. The other approach was the lotus leaf effect, and also the coffee ring effect: essentially evaporating the droplets so that you concentrate the biomolecules and bring them to the sensor directly. Here you do not have a lot of sensors, just a single sensor, but you are reducing the box size L. And finally, we talked about local generation, a topic that we will discuss in greater detail towards the end of the course as well.
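The payoff from reducing L is quadratic, which a two-line sketch makes concrete. It assumes the standard diffusion-time estimate tau ~ L^2/(2nD); the diffusion coefficient and the two box sizes below are illustrative assumptions.

```python
def diffusion_time(L, D, n=3):
    """Characteristic time to diffuse across a distance L: tau ~ L**2 / (2*n*D)."""
    return L * L / (2.0 * n * D)

D = 1.0e-10  # m^2/s, typical small-molecule diffusion coefficient (assumed)

# One lonely target in a 1 mm sample vs. 'boxing it in' to a 10 um neighborhood
t_far  = diffusion_time(1.0e-3, D)
t_near = diffusion_time(1.0e-5, D)
print(t_far, "s vs", t_near, "s; speedup ~", t_far / t_near)
```

Shrinking L by a factor of 100 cuts the search time by a factor of 10,000, which is why flooding the sample with drug (or with sensors) works so well.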
[Slide 16] The last biomimetic approach that we talked about was essentially using an antenna to enhance capture at the biosensor. That analysis, we saw, gives you an amplification proportional to L_A, the distance over which a captured particle is delivered to the sensor surface, divided by the radius of the sphere. As I explained, that is very similar to looking for a target site on DNA, and also to looking for a lost child in a city. All these principles are biomimetic: we are learning from biology how to make technology better. That is the essence of the enhancement, or advantage, of nanobiosensors.
[Slide 17] One strategy that did not work so well was fluid flow. If you put things in a channel and allow the fluid to flow quickly, we saw that if the sensor is very small, nanoscale, let's say a nanotube of one nanometer diameter, then the relative enhancement of the total flux is not significant. The reason is explained in this picture. It is a real photograph, in which aluminum particles have been used to trace the flow lines around a cylinder sitting in the flow. What happens is that if you have a large sensor radius a and a smaller stagnant-layer thickness delta, you get a big enhancement, because the molecules now only have to diffuse through this very thin stagnant layer to reach a very large sensor. But when you go to a nanoscale sensor, that does not help very much, because delta is comparable to, or larger than, a. You do not get as much, and that is the challenge of why flow does not help you as much.
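The size dependence of the flow benefit can be estimated with the cylinder version of the diffusion-equivalent capacitance, C = 2*pi*D / ln((a + delta)/a) per unit length (the two-concentric-cylinder analogy from earlier lectures). This is a sketch under assumptions: the stagnant-layer thickness, the no-flow diffusion distance, and both sensor radii below are illustrative values I have chosen, not numbers from the lecture.

```python
import math

def cyl_capture_rate(D, a, delta):
    """Diffusion-equivalent capacitance per unit length of a cylinder of radius a
    collecting through a shell of outer radius a + delta:
    C = 2*pi*D / ln((a + delta)/a)."""
    return 2.0 * math.pi * D / math.log((a + delta) / a)

D = 1.0e-10       # diffusion coefficient, m^2/s (assumed)
delta = 1.0e-6    # stagnant-layer thickness the flow establishes, ~1 um (assumed)
d_noflow = 1.0e-3 # distance the depletion region would span without flow (assumed)

for a, label in ((1.0e-4, "100 um sensor"), (1.0e-9, "1 nm nanotube")):
    gain = cyl_capture_rate(D, a, delta) / cyl_capture_rate(D, a, d_noflow)
    print(label, ": flow enhancement ~", round(gain, 1))
```

With these numbers, the microscale sensor gains a factor of a few hundred from the flow, while the nanotube gains only a factor of about two, because the logarithm barely changes once delta greatly exceeds a. That is the lecture's point in quantitative form.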
[Slide 18] So the bottom line is that this set of lectures allowed us to organize all the sensors: the classical sensors from the 1970s, all the nanostructured sensors, the nanowires. It allowed us to understand how bio-barcode sensors work (that work is about ten years old) and, more recently, droplet-based sensing (about two years old now), and why there is this 15-orders-of-magnitude range in the analyte density that nanobiosensors can respond to.
[Slide 19] So the bottom line: understanding the diffusion limit of nanobiosensors is the first step that anyone should undertake before the sensors can be fully understood, and then there are a number of biomimetic approaches. We are copying from nature in order to understand the biosensors and, at the same time, beat the diffusion limit associated with them; nature is helping us both ways. And finally, the point I wish to make is that the understanding we have gained is so general that we can take it back to biological problems as well. We discussed the mean first passage time and, for example, the narrow escape time. All these things are threaded together by the very simple concept of the diffusion equivalent capacitance. That's it. In the next set of lectures, we will be talking about specific types of biosensors: potentiometric, amperometric, and cantilever-based biosensors, in groups of, let's say, three to four lectures each. The idea will be to calculate n_s: what is the minimum number of particles you need in order to activate the sensor? Of course, depending on the method of transduction, the result will be slightly different. Until then, take care.