Recommended Posts

Is there another forum we have migrated to? I notice Drimzi edited all his posts, so I'm not sure if he has started another forum with a better idea or if he just gave up trying to find a solution.

Either way I'd like to reflect on some of the things we've talked about in this forum...

 

When solving any problem there are two ways to go about things:

  1. The first way is to start with a set of assumptions and logically derive a method. (deduction)
  2. The second way is to gather data about various methods and identify patterns in the data to help us define a method. (induction)

These two scientific approaches work in tandem to reach a conclusion.

One of the problems with our discussion has been that we are too heavily defining the solution based on method number 2. It's been useful for collectively defining the basic skeleton of our conversion method (we can all agree that the correct solution lies somewhere between 0% and 100% monitor match, keeping in mind, for many games that use Hdeg 4:3, monitor match is dependent on aspect ratio). But we've gotten to a point where asking for a consensus about which formula is best isn't going to work anymore. At this point it may very well be personal preference, but I'd like to try addressing some of our assumptions as this might help clarify what we are looking at.

What do we know about aim so far:

  • FOV is added and cropped depending on your aspect ratio, so cm/360 should theoretically remain the same no matter what aspect ratio you use.
  • 0% MM and converting based on the VFOV are aspect-ratio dependent methods (if the game uses rectilinear projection).
  • 2D can be represented as 0 FOV.
  • At 0 FOV the circumferential rotation is equally distributed so it is best to use 100% MM at 0 FOV.
  • At 180 FOV the distortion is completely squished in the middle, so theoretically it is best to use 0% MM at 180 FOV.
  • As we approach 0 FOV all the common methods converge, so below 30 FOV it doesn't really matter which method we use, as they will all feel practically the same. It matters more how the methods feel at higher FOVs, because this is where the results differ the most, so we should be testing by feel at high FOVs rather than at 90 FOV, and especially not at 30 FOV.
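
The first two bullets can be made concrete with the usual rectilinear relation between horizontal and vertical FOV. A minimal sketch (the function names are my own):

```python
import math

def vfov_from_hfov(hfov_deg, aspect):
    """Vertical FOV for a rectilinear projection, given horizontal FOV
    and aspect ratio (width / height)."""
    half_h = math.radians(hfov_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half_h) / aspect))

def hfov_from_vfov(vfov_deg, aspect):
    """Inverse: horizontal FOV from vertical FOV and aspect ratio."""
    half_v = math.radians(vfov_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half_v) * aspect))

# 90 deg Hdeg 4:3 -> shared VFOV -> 16:9 HFOV
vfov = vfov_from_hfov(90, 4 / 3)          # ~73.74 deg
hfov_169 = hfov_from_vfov(vfov, 16 / 9)   # ~106.26 deg
```

This is why 90 Hdeg 4:3 becomes the familiar 106.26 at 16:9: the horizontal FOV is added or cropped around a shared vertical FOV, so cm/360 can stay the same across aspect ratios.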

So knowing this, we need to address some questions to define the assumptions we have about what makes aim consistent and allows us to develop muscle memory:

  • How does our brain perceive "sensitivity"? Is it retaining the same cm/360, screen distance matching, synchronizing FOVs using the gear ratio, maintaining Viewspeed, maintaining speed at the crosshair, or something else?
  • How do we compensate for distortion?
  • And how does it relate to different projection methods?
  • Does it matter if our formula can never be perfect? How good is the brain at adapting?

I don't think these are all separate questions; they all need to be addressed at the same time. I know the correct formula is hard to prove without testing, but I really think the ideas need to make sense first and foremost. It should get to the point where we don't even need to test with our mouse to know it's the correct method.

I also think something we need to remember is that if it weren't for the fact that most games use rectilinear projection then we would have a different set of problems. I think nothing will be as perfect as we want it to be. There is no such thing as a perfect conversion method. It is dependent on the projection method. We don't live in a perfect world - we have to make do with the cards life has given us. We have to "pick our poison". But this doesn't mean we can't try our best and pick the "best poison"... ;)

 

So we need to make sure we address the assumptions before we continue on our discussions. One of the biggest things we should finalize is: How does our brain perceive "sensitivity"?

One of the concepts we've been using to convert sensitivity has been to maintain "sensitivity". So I thought I might give my opinion on some of the methods:

cm/360 Method:

This one is probably the easiest to disprove because you really can tell by feel that it is incorrect, but like I said, we should prove by idea and not by feel. So I'll explain the reasoning: as we approach 0 FOV the sensitivity should slow down, and as we approach 180 FOV the sensitivity should speed up. But if your cm/360 stays the same, lower FOVs will feel too fast and higher FOVs will feel too slow. You can represent FOV with circles going around the edges of your monitor, as shown in this diagram (the bigger the circle, the lower the FOV and the greater the cm/360):

[Attached image: cm360.png]
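
As a side note, cm/360 itself is easy to compute once you know the game's yaw increment per mouse count. A sketch, assuming the common 0.022 degrees/count default (Source/Quake-derived engines; other engines differ, so check your game):

```python
def cm_per_360(dpi, sens, yaw_deg_per_count=0.022):
    """Physical mouse travel for one full rotation.

    dpi: mouse counts per inch; sens: in-game multiplier;
    yaw_deg_per_count: degrees turned per count at sens = 1
    (0.022 is an assumption, not universal).
    """
    counts = 360 / (yaw_deg_per_count * sens)  # counts for a full turn
    return counts / dpi * 2.54                 # inches -> cm

travel = cm_per_360(800, 2.0)  # ~25.98 cm/360
```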

Gear Ratio Method:

This one makes the most sense in theory, having all your FOVs synced together like meshed gears, and from testing it has been found to yield exactly the same results as 100% MM, which I stated before would be the perfect method if the distortion were corrected and every point on the monitor were equally distributed. So this method actually would be perfect if it weren't for distortion. But since we are compensating for rectilinear distortion, the problem with this method and 100% MM is that they are far too slow at the center of the screen at higher FOVs. The same degree of rotation translates to movement that is too slow at the center of the monitor in comparison to the edge, as you can see in this diagram:

[Attached image: 100MM.png]

0% MM Method:

The 0% MM method suffers from the same problems as 100% MM but in reverse: too fast at the edges. It might actually be worse than 100% MM, because at least 100% MM has a limit to how slow it can become as it approaches 180 FOV, whereas with 0% MM the sensitivity approaches 0 cm/360. And if you were to convert your 2D sensitivity to 90 FOV using 0% MM then the lower FOVs would feel far too slow. The plus sides of this method are that it is aspect-ratio independent and the center of the crosshair feels the same, which is important when you want to maintain your consistency to micro-adjust. However, setting aside aspect-ratio independence, I personally think that if you are going to choose a method that works best for aiming at the crosshair then you should use 20% MM, or something similar, because when you are reacting to movement the target is already off the center of your crosshair.
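
To make the 0%/100% comparison concrete, here is the monitor match family as I understand it, in one formula: pick the fraction of the half-screen width to match, and scale cm/360 so the mouse travel needed to reach that point is preserved across FOVs (a sketch; the function name is mine):

```python
import math

def mm_ratio(fov_from_deg, fov_to_deg, match):
    """cm/360 multiplier when converting between two horizontal FOVs,
    preserving mouse travel to a point at `match` (0..1) of the
    half-screen width.  match=0 is the limiting case at the
    crosshair (0% MM); match=1 matches at the screen edge (100% MM)."""
    a = math.radians(fov_from_deg) / 2
    b = math.radians(fov_to_deg) / 2
    if match == 0:
        # limit as match -> 0: ratio of the tangents
        return math.tan(a) / math.tan(b)
    return math.atan(match * math.tan(a)) / math.atan(match * math.tan(b))

# converting 103 deg down to 70.53 deg:
r0 = mm_ratio(103, 70.53, 0)    # ~1.78x the cm/360 at 0% MM
r100 = mm_ratio(103, 70.53, 1)  # exactly 103/70.53 (~1.46x) at 100% MM
```

Note that at match = 1 the atan and tan cancel (for FOVs under 180), leaving the plain FOV ratio, which is consistent with the observation that the gear ratio method and 100% MM coincide.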

My method:

One of my main assumptions with developing this method has been the idea that we do not only aim at certain areas of the screen. A lot of the time we are snapping to various targets outside of our crosshairs. This is especially true in games like osu! and aim training where you are moving around the entire monitor. A big part of aim is not just to make micro-adjustments at the crosshair/cursor, but to utilize your muscle memory and snap to multiple targets in quick succession on the entire monitor. There's no point in using 0% MM if you keep under/overshooting your targets.

So what did I do? I created a simple formula to find the "middle ground" monitor match percentage: the point on the monitor where the distribution of distortion is split equally into two sections. This equalizes the entire screen so that the crosshair is not too slow and the edge of the monitor is not too fast; it is equally incorrect everywhere. It's still not perfect, but I think it is the best all-round "poison". It minimizes the flaws of each method and allows you to use both playstyles. And in playtesting I found that the center of the crosshair felt fine: it did not feel significantly different going from 90 FOV to 2D, and yet the rest of the screen didn't feel too fast like with 0% MM.

Keep in mind that you can use either HFOV or VFOV with this method. But using VFOV would make it aspect-ratio independent of course.

Aspect-ratio independence:

Hear me out on this - I really don't think your method needs to be aspect-ratio independent. As I said before one of the assumptions I made was that I wanted to use the entire screen. It's not like you are going to jump back and forth between 4:3 and 21:9 aspect ratios all the time. The aspect ratio you use isn't going to change. If you really wanted to work around this, you could just get a 1:1 aspect ratio monitor. But I don't think there is a point, because as I said - you have that extra screen space because you are going to use it... aren't you?

Just because 0% MM is the only truly aspect-ratio independent method doesn't mean it's the best method.

Other methods:

One of the assumptions that was made by Drimzi was that we perceive things in terms of 3D. I have to disagree - our eyes perceive in 2D. We live in a 3D world but what we are actually seeing are 2D images with our eyes. So when it comes to muscle memory, you are expecting to move a certain distance on the monitor based on how much you move the mouse.

But one of the interesting things, as I mentioned earlier, was that the method that matches based on 3D rotation - gear ratio method - would actually be correct if the gameworld projection was undistorted. So I think, 3D and 2D work together, but it's just the distortion that prevents this. I really don't think there's another way to reckon perceived sensitivity in light of the distortion but if anyone has any ideas, then I'm all ears.

Conclusion:

 

I mean, if game developers corrected the distortion then we wouldn't even be having this discussion because the obvious answer would be to use 100% MM. We really should tell developers to program their games properly. Ditch the rectilinear projection method and come up with something entirely new that has no distortion. Unlike correcting pictures from actual camera lenses, I think this should be very easy to do.

I'm open to being wrong about the things I've said, but I really think we should approach this problem more from first principles. It's clear to me now that it's a pick-your-poison kind of thing. But I believe there is a best poison. It's not exactly a preference, and I'm seriously okay with people using whatever method they want to use. I'm more concerned about changing the way we make games.

1 hour ago, potato psoas said:

Is there another forum we have migrated to? I notice Drimzi edited all his posts so not sure if he has started another forum with a better idea or if he just gave up trying to find a solution.

Either way I'd like to reflect on some of the things we've talked about in this forum...

 


I'm not sure where Drimzi has gone either, but as far as I'm aware we haven't migrated. Could you give a link to an equation or Excel spreadsheet for your method? I know the one you linked earlier you said was probably incorrect.

2 hours ago, KandiVan said:

I'm not sure where Drimzi has gone either, but as far as I'm aware we haven't migrated. Could you give a link to an equation or Excel spreadsheet for your method? I know the one you linked earlier you said was probably incorrect.

Yeah I can't remember exactly what I said either, but this is definitely a solid (and simple) formula.

You don't even need the monitor match formula for this because the calculator now allows you to input custom monitor distances. What you do is use the formula COS(FOV/2) to calculate what monitor match percentage to use for each FOV. Keep in mind that if you want things to be aspect-ratio independent, you use the VFOV instead of the HFOV.

E.g. If the VFOV is 73.74, then the VFOV MM% = COS(73.74/2) = ~79.99989281%

Keep in mind that the calculator uses the Horizontal Monitor Width to do its calculations so you will need to convert VFOV MM% to HFOV:

I.e. HFOV MM% = (10/16)*VFOV MM% ≈ 50% (the 10/16 factor is the monitor's height-to-width ratio)

Here is an example of what this should look like in the calculator:

[Attached image: example.png]

COS(FOV/2) simply finds the "middle ground" monitor match percentage. It means the distance to move to the center of the screen and the edge of the screen are equal, thus allowing all points on the monitor to feel more correctly converted. It scales from 100% MM at 0 FOV to 0% MM at 180 FOV using the curve of the unit circle (which I thought was appropriate). It's not perfect, but as I said, it's what I think is the best poison.
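
The worked example above can be sanity-checked in a few lines (the helper name is mine; the 10/16 step assumes a 16:10 panel as in the example):

```python
import math

def middle_ground_mm(vfov_deg):
    """The 'middle ground' match percentage, COS(VFOV/2):
    scales from 100% MM at 0 FOV down to 0% MM at 180 FOV
    along the curve of the unit circle."""
    return math.cos(math.radians(vfov_deg) / 2)

vmm = middle_ground_mm(73.74)  # ~0.80, i.e. ~80% of the vertical half-height
hmm = vmm * 10 / 16            # ~0.50 of the horizontal half-width (16:10)
```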

One of the interesting things I've been doing has been to combine different methods when changing aim type. I've found that it's simply easier to play with Call of Duty's default 0% MM and not adjust the DPI on my mouse, as long as I un-ADS before I move to acquire a new target. It takes a bit of getting used to, but I found it has another benefit: aside from maintaining muscle memory, you can acquire targets faster.

Edited by potato psoas
7 hours ago, potato psoas said:

Is there another forum we have migrated to? I notice Drimzi edited all his posts so not sure if he has started another forum with a better idea or if he just gave up trying to find a solution.

Either way I'd like to reflect on some of the things we've talked about in this forum...

 

When solving any problem there are two ways to go about things:

  1. The first way is to start with a set of assumptions and logically define a method. (induction)
  2. The second way is to gather data about various methods and identify patterns in the data to help us define a method. (deduction)

These two methods of science work interchangeably to come to a conclusion about things.

One of the problems with our discussion has been that we are too heavily defining the solution based on method number 2. It's been useful for collectively defining the basic skeleton of our conversion method (we can all agree that the correct solution lies somewhere between 0% and 100% monitor match, keeping in mind, for many games that use Hdeg 4:3, monitor match is dependent on aspect ratio). But we've gotten to a point where asking for a consensus about which formula is best isn't going to work anymore. At this point it may very well be personal preference, but I'd like to try addressing some of our assumptions as this might help clarify what we are looking at.

What do we know about aim so far:

  • FOV is added and cropped depending on your aspect ratio, so cm/360 should theoretically remain the same no matter what aspect ratio you use.
  • 0% MM and converting based on the VFOV are aspect-ratio dependent methods (if the game uses rectilinear projection).
  • 2D can be represented as 0 FOV.
  • At 0 FOV the circumferential rotation is equally distributed so it is best to use 100% MM at 0 FOV.
  • At 180 FOV the distortion is completely squished in the middle, so theoretically it is best to use 0% MM at 180 FOV.
  • As we approach 0 FOV all the common methods converge so it doesn't really matter which method we use past 30 FOV as they will all feel exactly the same. It matters more how the methods feel at higher FOVs because this is where the results differ the most, and as such, we should be testing our "feels" at higher FOV rather than 90 FOV and especially not 30 FOV.

So knowing this, we need to address some questions to define the assumptions we have about what makes aim consistent and allows us to develop muscle memory:

  • How does our brain perceive "sensitivity"? Is it retaining the same cm/360, screen distance matching, synchronizing FOVs using the gear ratio, maintaining Viewspeed, maintaining speed at the crosshair, or something else?
  • How do we compensate for distortion?
  • And how does it relate to different projection methods?
  • Does it matter if our formula can never be perfect? How good is the brain at adapting?

I don't think these are all separate questions, they all need to be addressed at the same time. I know the correct formula is so hard to prove without testing, but I really think we need the ideas to make sense first and foremost. It should get to a point where we don't even need to test with our mouse to know it's the correct method.

I also think something we need to remember is that if it weren't for the fact that most games use rectilinear projection then we would have a different set of problems. I think nothing will be as perfect as we want it to be. There is no such thing as a perfect conversion method. It is dependent on the projection method. We don't live in a perfect world - we have to make do with the cards life has given us. We have to "pick our poison". But this doesn't mean we can't try our best and pick the "best poison"... ;)

 

So we need to make sure we address the assumptions before we continue on our discussions. One of the biggest things we should finalize is: How does our brain perceive "sensitivity"?

One of the concepts we've been using to convert sensitivity has been to maintain "sensitivity". So I thought I might give my opinion on some of the methods:

cm/360 Method:

This one is probably the easiest to disprove because you really can tell by feel that this is incorrect but like I said, we should prove by idea and not by feel. So I'll explain the reason behind this: as we approach 0 FOV the sensitivity should slow down, and as we approach 180 FOV the sensitivity should speed up. But if your sensitivity stays the same, lower FOVs will feel too fast and higher FOVs will feel too slow. You can represent FOV with circles going around the edges of your monitor, as shown in this diagram: (the bigger the circle the lower the FOV and the greater the cm/360)

cm360.thumb.png.255b8a48bd3d217793529c2e7dca155c.png

Gear Ratio Method:

This one makes the most sense in theory, having all your FOVs synched together like the gears on a pulley, and from testing it has been found to yield exactly the same results as 100% MM, which I stated before would be the perfect method if the distortion was corrected and every point on the monitor was equally distributed. So this method actually would be perfect if it weren't for distortion. But since we are compensating for rectilinear distortion, the problem with this method and 100% MM is that it is far too slow at the center of the screen for higher FOVs. The same degree of rotation from the middle of the screen and from the edge of the screen translate to movement that is too slow at the center of the monitor in comparison to the edge of the monitor as you can see in this diagram:

100MM.png.a70399084e7b77400b84c59ec8de56e1.png

0%MM Method:

The 0% MM method suffers from the same problems as 100% MM but just in reverse order - too fast at the edges. It might actually be worse than 100% MM because at least 100% MM has a limit to how slow it can become as it approaches 180 FOV, whereas with 0% MM, the sensitivity also approaches 0cm/360. And if you were to convert your 2D sensitivity to 90 FOV using 0% MM then the lower FOVs would feel far too slow. The plus sides to this method is that it is aspect-ratio independent and the center of the crosshair feels the same - which is important when you want to maintain your consistency to micro-adjust. However, I personally think, apart from the fact that it is aspect-ratio independent, if you are going to choose a method that works best for aiming at the crosshair then you should use 20%, or something similar. This is because, when you are reacting to movement, the target is already off the center of your crosshair.

My method:

One of my main assumptions with developing this method has been the idea that we do not only aim at certain areas of the screen. A lot of the time we are snapping to various targets outside of our crosshairs. This is especially true in games like osu! and aim training where you are moving around the entire monitor. A big part of aim is not just to make micro-adjustments at the crosshair/cursor, but to utilize your muscle memory and snap to multiple targets in quick succession on the entire monitor. There's no point in using 0% MM if you keep under/overshooting your targets.

So what did I do? I created a simple formula to find the "middle ground" monitor match percentage, the point on the monitor where the distribution of distortion is equally split into two sections. What this does is it equalizes the entire screen so that the crosshair is not too slow and the edge of the monitor is not too fast - it is equally incorrect. It's still not perfect, but I think this is the best all round "poison". It minimizes the flaws of each method and allows you to use both playstyles. And in play testing I found that the center of crosshair was fine - it did not feel significantly different going from 90 FOV to 2D, and yet the rest of the screen didn't feel too fast like with 0% MM.

Keep in mind that you can use either HFOV or VFOV with this method. But using VFOV would make it aspect-ratio independent of course.

Aspect-ratio independence:

Hear me out on this - I really don't think your method needs to be aspect-ratio independent. As I said before one of the assumptions I made was that I wanted to use the entire screen. It's not like you are going to jump back and forth between 4:3 and 21:9 aspect ratios all the time. The aspect ratio you use isn't going to change. If you really wanted to work around this, you could just get a 1:1 aspect ratio monitor. But I don't think there is a point, because as I said - you have that extra screen space because you are going to use it... aren't you?

Just because 0% MM is the only truly aspect-ratio independent method doesn't mean it's the best method.

Other methods:

One of the assumptions Drimzi made was that we perceive things in terms of 3D. I have to disagree - our eyes perceive in 2D. We live in a 3D world, but what our eyes actually receive are 2D images. So when it comes to muscle memory, you are expecting to move a certain distance on the monitor based on how much you move the mouse.

But one of the interesting things, as I mentioned earlier, is that the method that matches based on 3D rotation - the gear ratio method - would actually be correct if the gameworld projection were undistorted. So I think 3D and 2D work together; it's just the distortion that prevents this. I really don't see another way to reconcile perceived sensitivity with the distortion, but if anyone has any ideas, I'm all ears.

Conclusion:

 

I mean, if game developers corrected the distortion then we wouldn't even be having this discussion because the obvious answer would be to use 100% MM. We really should tell developers to program their games properly. Ditch the rectilinear projection method and come up with something entirely new that has no distortion. Unlike correcting pictures from actual camera lenses, I think this should be very easy to do.

I'm open to being wrong about the things I've said, but I really think we should take a more inductive approach to this problem. It's clear to me now that it's a pick your poison kind of thing. But I believe there is a best poison. It's not exactly a preference, but I'm seriously okay with people using whatever the heck method they want to use. I'm more concerned about changing the way we make games.

Not sure how you conclude 100% MM is a good route to take. Like you explained in your post above, 100% is accurate at the edge of the 16:9 screen, so why in the world would you want to match on the edge rather than the center? Here's a tip, from someone who has tried virtually all methods at one point or another: stick to what you like. If you like 100% you will be good with it; once enough time has passed and you have enough experience with it or any other method, it will be good for you. 0% is objectively ideal for aiming around the center of the screen, but I used 75% for years with just as good results. I have hard-switched to zero and now I am just as good with it as I was with 75%.

There is no "best" sensitivity; the only fact is that you need to account for zoomed ratios. Outside of that, use whatever you like - even if you don't like it at first, with enough time it will become natural to you. Ideally, you should try to match hipfire FOVs across your games; if you can't, a higher match percentage might be less jarring. In those cases, 75% and 100% are very similar, but they really are all close, even at Quake Champions' extreme FOV of 130. So yeah, if you were using games with drastically different FOVs, then a higher match typically would work better, but the fact would remain: it would be as accurate in the center of the screen as 0%.

Link to comment
7 hours ago, Bryjoe said:

Not sure how you conclude 100% MM is a good route to take. Like you explained in your post above, 100% is accurate at the edge of the 16:9 screen, so why in the world would you want to match on the edge rather than the center? Here's a tip, from someone who has tried virtually all methods at one point or another: stick to what you like. If you like 100% you will be good with it; once enough time has passed and you have enough experience with it or any other method, it will be good for you. 0% is objectively ideal for aiming around the center of the screen, but I used 75% for years with just as good results. I have hard-switched to zero and now I am just as good with it as I was with 75%.

There is no "best" sensitivity; the only fact is that you need to account for zoomed ratios. Outside of that, use whatever you like - even if you don't like it at first, with enough time it will become natural to you. Ideally, you should try to match hipfire FOVs across your games; if you can't, a higher match percentage might be less jarring. In those cases, 75% and 100% are very similar, but they really are all close, even at Quake Champions' extreme FOV of 130. So yeah, if you were using games with drastically different FOVs, then a higher match typically would work better, but the fact would remain: it would be as accurate in the center of the screen as 0%.

I didn't say it was a good route to take because of distortion, but without distortion, the projection would be evenly distributed along the monitor, and therefore, if you match mouse movement to the edge of the monitor (100% MM) then all points on the monitor would be correct. But I never said it was the best method. I agree with what you say and a lot of it has to do with personal preference and playstyle, but remember to consider the assumptions behind your arguments because they determine which method you prefer.

The main assumption I use is that it is better to have the whole screen equally "usable" than to have only one point on the screen usable and the rest completely unusable, which is why I came up with the middle ground approach. So I don't use 100% MM with rectilinear games; I use a scaling monitor match formula which scales from 100% at 0 FOV to 0% at 180 FOV along the unit circle, allowing a smoother and more consistent approach to sensitivity conversion.
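The scaling itself is simple enough to sketch. This assumes the cos(FOV/2) form stated elsewhere in the thread for the "unit circle" scaling (the function name is mine):

```python
import math

def scaling_match_pct(fov_deg):
    """Scaling monitor match percentage: 100% at 0 FOV falling to
    0% at 180 FOV along the unit circle, i.e. cos(FOV/2)."""
    return math.cos(math.radians(fov_deg) / 2)
```

The result can then be entered as a custom monitor distance; at 90 FOV, for example, it comes out to about 70.7%.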

Link to comment
13 minutes ago, potato psoas said:

I didn't say it was a good route to take because of distortion, but without distortion, the projection would be evenly distributed along the monitor, and therefore, if you match mouse movement to the edge of the monitor (100% MM) then all points on the monitor would be correct. But I never said it was the best method. I agree with what you say and a lot of it has to do with personal preference and playstyle, but remember to consider the assumptions behind your arguments because they determine which method you prefer.

The main assumption I use is that it is better to have the whole screen equally "usable" than to have only one point on the screen usable and the rest completely unusable, which is why I came up with the middle ground approach. So I don't use 100% MM with rectilinear games; I use a scaling monitor match formula which scales from 100% at 0 FOV to 0% at 180 FOV along the unit circle, allowing a smoother and more consistent approach to sensitivity conversion.

True. I think all this discussion over the last year has really been about matching games comfortably across different FOVs, and it's just kind of a band-aid on a situation that is not optimal. A bunch of questions arise, like: is it better to go with a match percentage that is as close to 360 distance as possible, or with a match percentage you're used to, at the risk of movement being totally off in certain games? I think it's obvious which is better for your aim (the matching method you're used to). So the "catch-all" solution would be: match your hipfire FOV and then use your preferred matching method for scopes. If you can't match hipfire FOV, or don't want to use the preferred match for aim, match on 360 distance for movement.

Edited by Bryjoe
Link to comment
34 minutes ago, Bryjoe said:

True. I think all this discussion over the last year has really been about matching games comfortably across different FOVs, and it's just kind of a band-aid on a situation that is not optimal. A bunch of questions arise, like: is it better to go with a match percentage that is as close to 360 distance as possible, or with a match percentage you're used to, at the risk of movement being totally off in certain games? I think it's obvious which is better for your aim (the matching method you're used to). So the "catch-all" solution would be: match your hipfire FOV and then use your preferred matching method for scopes. If you can't match hipfire FOV, or don't want to use the preferred match for aim, match on 360 distance for movement.

Yeah, although something can be said for understanding which flaws you're willing to accept with each method, it really does come down to practice. All these methods are just as bad as each other, so the most important thing for building muscle memory is to pick one and not change your settings. And try not to practice on too many different fields of view, because for each field of view you're learning a completely new layout of distortion.

Link to comment

I settled on aspect ratio independent 100% monitor match. Since monitors aren't square, for 99.99% of people out there, this is using the vertical fov, not horizontal. It's the same as the "Gear Ratio Method" in potato psoas' post, for example, 90 vfov will have half the cm/360 of 45 vfov. If I remember correctly, this should have been the correct method if you ignored rectilinear distortion, which seems to be impossible to compensate for anyway. After rigorous testing, this felt the best to me, and it still feels the best. In the end, you should be testing each method yourself, as I did, and decide for yourself what the best solution is.

I still firmly believe that a method should be aspect ratio independent, as the overall image and distortion is dependent on the 1:1 aspect ratio's fov. The rest is merely added/cropped depending on your monitor, and you can choose to obstruct additional pixels, or use pillarboxing or letterboxing, without affecting anything. I am not saying that a monitor match method that exceeds the 1:1 dimensions is wrong, I am saying if you change the aspect ratio, the results should not change. So instead of imagining the monitor match as a percentage of the horizontal, think of it as a percentage (or coefficient) of the 1:1 aspect ratio, like Battlefield's Uniform Soldier Aiming. 75% of 16:9 should instead be interpreted as a 4/3 coefficient or 133.33%.
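The arithmetic of that last re-interpretation is easy to check. A tiny sketch (the helper name is hypothetical) re-expressing a horizontal match fraction as a coefficient of the square aspect ratio:

```python
from fractions import Fraction

def square_coefficient(match, aspect_w, aspect_h):
    """Re-express a horizontal monitor-match fraction as a coefficient
    of the 1:1 (square) aspect ratio's width."""
    return match * Fraction(aspect_w, aspect_h)

# 75% of a 16:9 screen's width is 4/3 (133.33%) of the square's width
print(square_coefficient(Fraction(3, 4), 16, 9))  # -> 4/3
```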

I removed all past posts because they only applied at the time they were made. They shouldn't be required for anybody today, and I don't want people reading outdated posts after being brought in by search results or Google.

 

edit: Gone back to Viewspeed v2. After using a better script for testing the scaling, Viewspeed v2 was the ONLY method that actually felt identical at all FOVs. The only reason Viewspeed v2 feels slower in-game and faster on the desktop is that you are comparing a flat, uniform medium to aim on against an arc with non-uniform distortion, where a lot more information is stored in the center.

Edited by Drimzi
Link to comment

A more current discussion on sensitivity conversion can be found here. If no current solution feels right to you, then click that link. It has a very nice comprehensive list of material to read through if you want to understand the topic and contribute to a working solution.

Link to comment
23 hours ago, potato psoas said:

0% MM and converting based on the VFOV are aspect-ratio dependent methods

The first isn't at all and the second only kind of is. Like yeah 100% 16:9 isn't the same as 100% 4:3 but just in the same way that 90° 16:9 isn't the same thing as 90° 4:3. If you take into account the aspect ratio difference you can make them exactly the same.

22 hours ago, KandiVan said:

2D can be represented as 0 FOV.

I wouldn't say we know this. If it were true, then you'd be able to do desktop-to-game conversions using one of the "zoom levels" as 0 FOV. If this has been shown, I've missed it.

16 hours ago, Bryjoe said:

At 0 FOV the circumferential rotation is equally distributed so it is best to use 100% MM at 0 FOV.

As the higher FOV value (i.e. hipfire) approaches 0 FOV all methods become exactly the same, so none is "best" or even any different here.

23 hours ago, potato psoas said:

at least 100% MM has a limit to how slow it can become as it approaches 180 FOV, whereas with 0% MM, the sensitivity also approaches 0cm/360.

As you approach 180° with 0% it approaches infinite distance/360. This makes sense because as you approach 180° FOV each additional amount of FOV takes up an amount of screen approaching infinity. If you have an infinite amount of zoom it'd follow logically that the distance/360 would change infinitely. This is also why you can't have 180° of FOV, you'd need an infinite amount of screen space. 0% also has other benefits: https://imgur.com/a/szjlq
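To make that limiting behaviour concrete, here is a small numeric sketch (assuming the usual tan(FOV/2) form of 0% MM; the function name is mine):

```python
import math

def zero_mm(cm360_hip, fov_hip, fov_zoom):
    """0% monitor match: cm/360 scales by the ratio of tan(FOV/2)."""
    return (cm360_hip * math.tan(math.radians(fov_hip) / 2)
            / math.tan(math.radians(fov_zoom) / 2))

# tan(FOV/2) blows up as FOV -> 180, so the converted distance/360
# from a near-180 hipfire to a fixed zoom FOV grows without bound:
for fov in (170, 178, 179.9):
    print(fov, round(zero_mm(30, fov, 90), 1))
```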

23 hours ago, potato psoas said:

I mean, if game developers corrected the distortion then we wouldn't even be having this discussion because the obvious answer would be to use 100% MM. We really should tell developers to program their games properly. Ditch the rectilinear projection method and come up with something entirely new that has no distortion. Unlike correcting pictures from actual camera lenses, I think this should be very easy to do.

Literally impossible to have a distortionless 2D projection. Rectilinear is used because it keeps straight lines straight. With other methods a straight line will appear curved. Sure, with rectilinear that line may appear stretched, but at least it's still straight.

21 hours ago, potato psoas said:

You don't even need the monitor match formula for this because the calculator now allows you to input custom monitor distances. What you do is use the formula: COS(FOV/2) to calculate what monitor match percentage you use for each FOV. Keep in mind if you want things to be aspect-ratio independent you use the VFOV instead of the HFOV.

I've said it before, and with graphs, that scaling match distance is bad. Idk what else to do but provide more graphs.

https://www.desmos.com/calculator/ifgczuhynp

These graphs are the resulting sensitivity multiplier if you halve FOV from X. (i.e. going from x=90 to 45, or x=170 to 85).

The black line is your method.

Another example (that I've posted before):

https://www.desmos.com/calculator/j1cjrnncvr

Graphs are the resulting sensitivity multiplier with a hipfire fov of 103 and a zoom FOV of X.

Again the black line is your method.

Edited by Skwuruhl
Link to comment
5 hours ago, Drimzi said:

A more current discussion on sensitivity conversion can be found here. If no current solution feels right to you, then click that link. It has a very nice comprehensive list of material to read through if you want to understand the topic and contribute to a working solution.

Sounds like a good idea.

Link to comment
5 hours ago, Drimzi said:

I settled on aspect ratio independent 100% monitor match. Since monitors aren't square, for 99.99% of people out there, this is using the vertical fov, not horizontal. It's the same as the "Gear Ratio Method" in potato psoas' post, for example, 90 vfov will have half the cm/360 of 45 vfov. If I remember correctly, this should have been the correct method if you ignored rectilinear distortion, which seems to be impossible to compensate for anyway. After rigorous testing, this felt the best to me, and it still feels the best. In the end, you should be testing each method yourself, as I did, and decide for yourself what the best solution is.

I still firmly believe that a method should be aspect ratio independent, as the overall image and distortion is dependent on the 1:1 aspect ratio's fov. The rest is merely added/cropped depending on your monitor, and you can choose to obstruct additional pixels, or use pillarboxing or letterboxing, without affecting anything. I am not saying that a monitor match method that exceeds the 1:1 dimensions is wrong, I am saying if you change the aspect ratio, the results should not change. So instead of imagining the monitor match as a percentage of the horizontal, think of it as a percentage (or coefficient) of the 1:1 aspect ratio, like Battlefield's Uniform Soldier Aiming. 75% of 16:9 should instead be interpreted as a 4/3 coefficient or 133.33%.

I removed all past posts because they only applied at the time they were made. They shouldn't be required for anybody today, and I don't want people reading outdated posts after being brought in by search results or Google.

Ok wait so you are using the gear ratio method?? I was wondering what kind of formula you had created.

Link to comment
21 hours ago, Skwuruhl said:

The first isn't at all and the second only kind of is. Like yeah 100% 16:9 isn't the same as 100% 4:3 but just in the same way that 90° 16:9 isn't the same thing as 90° 4:3. If you take into account the aspect ratio difference you can make them exactly the same.

Ah I meant to say independent not dependent.

21 hours ago, Skwuruhl said:

I wouldn't say we know this. If it were true, then you'd be able to do desktop-to-game conversions using one of the "zoom levels" as 0 FOV. If this has been shown, I've missed it.

I'll get round to explaining this because it is something I can prove.

21 hours ago, Skwuruhl said:

As the higher FOV value (i.e. hipfire) approaches 0 FOV all methods become exactly the same, so none is "best" or even any different here.

That's true, but the point I was trying to make was that when there is no distortion, all points on the monitor are equally distributed, so the best method to use would be 100% MM. This is one of the reasons why I made the assumption that the best formula should scale from 100% MM at 0 FOV to 0% MM at 180 FOV.

21 hours ago, Skwuruhl said:

As you approach 180° with 0% it approaches infinite distance/360. This makes sense because as you approach 180° FOV each additional amount of FOV takes up an amount of screen approaching infinity. If you have an infinite amount of zoom it'd follow logically that the distance/360 would change infinitely. This is also why you can't have 180° of FOV, you'd need an infinite amount of screen space. 0% also has other benefits: https://imgur.com/a/szjlq

Well, when I say 0cm/360 I mean that as you approach 180 FOV it takes less and less mouse distance to do a 360, until at exactly 180 FOV you hit the degenerate case of 0cm for a 360.

21 hours ago, Skwuruhl said:

Literally impossible to have a distortionless 2D projection.

I have some ideas on how to do it >.>

21 hours ago, Skwuruhl said:

I've said it before, and with graphs, that scaling match distance is bad. Idk what else to do but provide more graphs.

https://www.desmos.com/calculator/ifgczuhynp

These graphs are the resulting sensitivity multiplier if you halve FOV from X. (i.e. going from x=90 to 45, or x=170 to 85).

The black line is your method.

I don't understand how this proves it to be wrong. Scaling is not a bad assumption; it just dynamically changes the monitor match -> MM(FOV). Approaching 0 FOV it converges like the rest of them, and approaching 180 FOV it becomes more like 0% MM - it stays well within the expected pattern of the other methods. Even in testing, it feels no worse than the others. You'll have to explain better what you mean, because I really just don't get it.

Edited by potato psoas
Link to comment
16 hours ago, potato psoas said:

Ok wait so you are using the gear ratio method?? I was wondering what kind of formula you had created.

Yeah, with the exception of clamping to 1:1 aspect ratio. The angular increment is equal to a pixel if you could evenly distribute the field of view. Obviously the game doesn't do this, but it still felt like a perfect conversion when seamlessly transitioning from 10 FOV to 150 FOV using a script in CSGO.

...

I'm not sure if the desktop can be considered as 0 FOV. For monitor match methods, the 2D distance is used as the arc length for 180 FOV.

Also, in the formula for converting sensitivity between games, you just specify the desktop as a game with a sensitivity value of 1/resolution height * wps, with 1 fov and 1 yaw/pitch.

Link to comment
1 hour ago, Drimzi said:

I'm not sure if the desktop can be considered as 0 FOV. For monitor match methods, the 2D distance is used as the arc length for 180 FOV.

[Attached diagram: rectilinear projection shown as a progression of FOVs - circles of increasing radius sharing a common chord between the monitor edges]

This diagram represents how rectilinear projection works - it shows a progression of FOVs. As the FOV decreases, the circles get bigger, the circumferences increase and the arc between the bounds of the monitor edges becomes flatter. Then as you approach 0 FOV, the circumference approaches infinity and the arc, and even the rest of the circle, becomes completely flat, until it is considered 2D. 0 FOV is both 2D and 3D.

And even though we can't define 2D in terms of cm/360, because the circumference of 0 FOV is infinitely long, its "sensitivity" can be defined another way... the chord length. If the FOVs all share the same chord length, then the length of the chord also determines the circumference for all the fields of view. It's in the diagram, so the proof is in the pudding. So whatever your 2D edge-to-edge sensitivity is, you can use trigonometry to convert it to a 3D sensitivity and vice-versa.

And from testing, this method yields the same results as 100% MM and the gear ratio method, but with the addition of not getting an error at 0 FOV. It's just another way to look at things, and only further solidifies the fact that 100% MM would be the only true method, if it weren't for distortion.
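One way to read the chord idea in code (a sketch of my interpretation; whether this matches the calculator's internals is an assumption): if the 2D edge-to-edge mouse distance is the chord subtending the FOV, then chord = 2r*sin(FOV/2), and the cm/360 is the full circumference 2*pi*r:

```python
import math

def cm360_from_chord(chord_cm, fov_deg):
    """Treat the 2D edge-to-edge mouse distance as the chord that
    subtends the FOV. chord = 2r*sin(FOV/2), so the full circle's
    circumference (the cm/360) is pi * chord / sin(FOV/2)."""
    return math.pi * chord_cm / math.sin(math.radians(fov_deg) / 2)
```

As FOV approaches 0 the circumference diverges, matching the "circumference approaches infinity" picture in the diagram; at 180 FOV the chord is the diameter, and the cm/360 is simply pi times the 2D distance.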

Link to comment
  • 1 month later...

Viewspeed v2 is supposed to use the shortest measurement, not the vertical in particular. In portrait orientation, that means the horizontal rather than the vertical.

The multiplier is: sin(SquareDegrees * pi/360)

Viewspeed v1 uses horizontal degrees, and is pointless to use.
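For reference, the stated multiplier in code (assuming "SquareDegrees" is the 1:1 FOV in degrees; the function name is mine):

```python
import math

def viewspeed_v2(square_degrees):
    """Viewspeed v2 multiplier as given above: sin(SquareDegrees * pi/360),
    which is sin(FOV/2) with the 1:1 (shortest-side) FOV in degrees."""
    return math.sin(square_degrees * math.pi / 360)
```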

 

Edited by Drimzi
Link to comment
  • 5 months later...

Is this discussion still going on here, or somewhere else? Also, can I get the formula for the viewspeed vertical setting that's currently being used in the calculator?

Nvm, I found the post, my bad. If you have it, you can still give me the formula here.

Edited by Kilroy
Link to comment
14 hours ago, Kilroy said:

Is this discussion still going on here, or somewhere else? Also, can I get the formula for the viewspeed vertical setting that's currently being used in the calculator?

Nvm, I found the post, my bad. If you have it, you can still give me the formula here.

It's been pretty much conclusively determined that monitor distance 0% is the "one size fits all" solution. It is the most direct way to scale your hipfire sensitivity. That being said, preference is preference: a lot of people prefer 75% monitor match or the default zoom sens in CSGO (plenty of incredible professional players use this). If you are used to and like Viewspeed, then continue to use it, but there is no "magic" to it; it's just another way to scale your sensitivity to different FOVs (it's actually roughly the same as 75% monitor match, if I recall).

Math wizards with much more knowledge on the subject and calculations than me have determined 0% to be the best for muscle memory so that is what I use and it's also what I would recommend.

Link to comment