Perceived sensitivity



First, some background: we set our sensitivity such that a certain amount of mouse movement results in a certain amount of soldier movement. In its simplest form, we have our hipfire at some cm/360.
If this were real life, using a magnified optic would mean that our movement is magnified accordingly - this is why shooters take such measures as controlling their breathing and heart rate. In game, where magnification amounts to a reduced FOV, this is very unwieldy. Having the same cm/360 at all levels of zoom just doesn't work: as we zoom in, the sensitivity becomes impractically high - it feels too fast. Muh feels.
So, we can use some formula to reduce our sensitivity in proportion with the reduced FOV. The math is fairly straightforward in this case; it is what we here refer to as "0% monitor matching". So, we try this... and it doesn't feel right either. Muh feels.
So, we can try to make our 3D game respond as though it were a 2D space like our desktop. This is never going to work perfectly, because the 3D game world is NOT 2D no matter how hard we try to treat it as though it were. So, sometimes it feels right, and other times, not so much. Muh feels.

Now, 'muh feels' is a term which often carries a negative connotation. The implication of saying 'muh feels' is that the person is ignoring 'muh reals' - ignoring reality in favour of their subjective sensation. I want to state very early on that such an attitude is not the point of this thread. On the contrary, it is clear to me that there is strong reasoning behind the way that sensitivity changes across FOV changes are perceived - and this is the golden word here - Perception.

These 'muh feels' are not coming from nowhere. Yes, there will be some psychological effects like placebo and memory and many, many others, but I don't think that any of us could put all of our observations down to just these effects. What we do know is that the human brain performs a great deal of work on the image captured by our eyes, to determine relevant data such as the distance, angle, and size of objects in the game. It should be clear to us all by now that this image processing performed by our brains is having an effect on the way our movement feels in-game - otherwise, 0%MM would just work.

In other threads, a great deal of work has been done to find a formula which 'feels' right. Much of that work has validated previous formulae, and I hope that this thread will do the same. However, the intention of this thread is to take a different approach to finding that 'right' formula. Rather than a process of elimination - in other words, trying a formula and adjusting it according to 'muh feels' until it 'feels' 'right' - I want to take a look at WHY it doesn't 'feel' 'right'. I believe that a more scientific approach will be beneficial as a counterpart to the scatter-gun approaches which we have used in those other threads.

I also want to be clear that I am by no means an expert on this subject. Human visual perception is kinda rocket science and I warmly welcome any input on this. However, the intention here is not to simply say "it feels too slow" or "it feels too fast", but to figure out WHY. The first step in solving any problem is to define that problem clearly (every properly defined question contains its own answer), and to the best of my knowledge, this has not yet been done.

I've collected a fair few documents and videos as a basis for this, and (largely because I have too many tabs open in my browser) I think it's about time I posted these links here for future reference. I hope you might take the time to look these over and give them some thought, and perhaps you might even have some information you could add to the discussion - I need all the help I can get :)

The next post in this thread will be a 'library' of links that we can reference, which I will update over time as more information becomes available, and the following post will be a brief overview of what I've been able to derive from that information, which appears to be related to our perception of the 3D games projected on our 2D displays.


Library

http://www.tawbaware.com/projections.htm
http://www.tawbaware.com/ptassembler_mod_rect_projections.pdf
 

Quote

Terminology
In this document, the following symbols are used:
λ=yaw=longitude
Φ=pitch=latitude

In cartographers' terms, yaw is referred to as longitude, and pitch is referred to as latitude.

Rectilinear projection is defined as follows:
x=tan(λ)
y=tan(Φ)/cos(λ)
These equations describe how yaw and pitch (longitude and latitude) values are mapped to corresponding x and y coordinates on a 2-dimensional map or image. As yaw (longitude) increases, values along the x axis become increasingly stretched...

http://mathworld.wolfram.com/GnomonicProjection.html
- So why is this formula different to the above? Because we're not doing a map projection of the globe. Turns out that the same terms are used both in cartography (maps) and photography - the latter of which is actually relevant to us.

https://en.wikipedia.org/wiki/Rectilinear_lens
- Note the image here, which shows us why we don't want to avoid the stretching distortion present in rectilinear projection: avoiding it gives us the fish-eye effect (known as 'barrel distortion'), which means that straight lines are no longer straight. That would be very bad in a shooter, for example. Speaking of photography and its relevance to us, and distortion...


https://en.wikipedia.org/wiki/Distortion_(optics)
- but why? Let's look at how the machine creates the image... This can get pretty nerdy and isn't mandatory for understanding this subject, so feel free to skip the computer stuff. It's the brain stuff that's important here.


https://msdn.microsoft.com/en-us/library/windows/desktop/bb147302(v=vs.85).aspx?f=255&MSPPError=-2147217396
https://en.wikipedia.org/wiki/3D_rendering
https://en.wikipedia.org/wiki/3D_projection
https://en.wikipedia.org/wiki/Field_of_view_in_video_games
https://developer.valvesoftware.com/wiki/Field_of_View


Of course I should show these excellent examples provided by forum members (guys please take credit here!)
[Image: HN5GcL3.jpg]

[Image: fsdfdsf.png - Drimzi's high-FOV in-game screenshot]

And now we get into the real meat and potatoes of this thread: how this affects our perception of the image. Food for thought - let's start with Escher.

[Image: Escher_Waterfall.jpg - M. C. Escher's 'Waterfall']

https://en.wikipedia.org/wiki/Optical_illusion
https://en.wikipedia.org/wiki/Perspective_(graphical)
http://artsygamer.com/fov-in-games/
https://en.wikipedia.org/wiki/Angle_of_view#Cinematography_and_video_gaming
https://en.wikipedia.org/wiki/Focal_length
https://en.wikipedia.org/wiki/Normal_lens#Perspective_effects_of_short_or_long_focal-length_lenses
https://en.wikipedia.org/wiki/Perspective_distortion_(photography)
https://en.wikipedia.org/wiki/Angular_diameter
https://en.wikipedia.org/wiki/Perceived_visual_angle
https://en.wikipedia.org/wiki/Depth_perception

And really, if you only read ONE of these links, this is the one I feel would be most pertinent:

https://en.wikipedia.org/wiki/Forced_perspective

Look at the CSGO image above, look at the Odessa Steps....

Edited by CaptaPraelium

On rectilinear projection, distortion, and forced perspective:
 

So, we know already that a perfect mathematical scaling between FOVs is '0% Monitor Matching':
 ZoomLevel = tan(HipFOV/2)/tan(ZoomFOV/2)
 HipFOV = 2*(arctan(ZoomLevel*(tan(ZoomFOV/2))))
 ZoomFOV = 2*(arctan(tan(HipFOV/2)/ZoomLevel))
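To make those three forms concrete, here's a quick Python sketch of them (function and variable names are mine):

import math

def zoom_level(hip_fov_deg, zoom_fov_deg):
    # Magnification between two (horizontal) FOVs - aka 0% monitor matching
    return math.tan(math.radians(hip_fov_deg) / 2) / math.tan(math.radians(zoom_fov_deg) / 2)

def hip_fov(zoom_level, zoom_fov_deg):
    # Recover the hipfire FOV from a zoom level and the zoomed FOV
    return math.degrees(2 * math.atan(zoom_level * math.tan(math.radians(zoom_fov_deg) / 2)))

def zoom_fov(zoom_level, hip_fov_deg):
    # Recover the zoomed FOV from a zoom level and the hipfire FOV
    return math.degrees(2 * math.atan(math.tan(math.radians(hip_fov_deg) / 2) / zoom_level))

print(zoom_level(90, 30))   # ~3.73x
print(zoom_fov(3.732, 90))  # ~30 degrees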

So, let's think about what is actually contained within those images which influences our perception of the game world and causes our mouse to require a different scaling algorithm. We can start by looking at the formula for the projection:

x=tan(λ)
y=tan(Φ)/cos(λ)

There are two obvious facets to this. One is the distortion by the tangent of the angle. The other is the reduced distortion on the vertical axis, caused by the division by the cosine of the horizontal angle. It is often said that rectilinear projection results in horizontal stretching, but we can see that in fact both axes are stretched, just the vertical axis less so. Whatever - the result is that the horizontal axis is more stretched.

But this isn't a stretch that happens evenly across the entire monitor. The stretching distortion will always be at its minimum at the centre of our screen, and increase from there outward. This is why attempts to treat the game as we would a 2D space can never work: a 2D space like our desktop is evenly stretched from edge to edge. This is why we have n% monitor matching, why Battlefield uses a coefficient, and why CSGO also uses a (fixed at 133%, aka 4:3 aspect, afaik) coefficient - these are attempts to make such an approach function, and they will succeed, but ONLY at that exact percentage/coefficient value.
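For reference, since n% monitor matching keeps coming up: here's the usual way it's computed, as a Python sketch (measuring n as a fraction of the distance from screen centre to edge; names are mine):

import math

def monitor_match(n, hip_fov_deg, zoom_fov_deg):
    # Sensitivity multiplier that matches mouse distance to the point
    # n (0..1) of the way from the screen centre toward the edge.
    hip = math.radians(hip_fov_deg) / 2
    zoom = math.radians(zoom_fov_deg) / 2
    if n == 0:
        # The limit as n -> 0 is the zoom ratio, aka 0% monitor matching
        return math.tan(zoom) / math.tan(hip)
    return math.atan(n * math.tan(zoom)) / math.atan(n * math.tan(hip))

# The multiplier depends on n, which is exactly why a given match only
# 'works' at its own percentage:
for n in (0, 0.5, 0.75, 1.0):
    print(n, monitor_match(n, 103, 51))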


So, we have two considerations here which will affect our perception: increased horizontal stretching compared to the vertical, and increased stretching on both axes as we diverge from the centre.


I will touch on the latter effect only briefly for now - and I'm not shy to say that's because I really don't know how our brains perceive this. The only information I have come across thus far was this comment by @TheNoobPolice over on the Battlefield forums:

https://forums.battlefield.com/en-us/discussion/comment/532023/#Comment_532023

Quote

we realised our brains don't give us a sensitivity sensation based on one part of the screen, rather an average across the screen, and this might be different for different people.

I'm confident that further study of the effect of distortion on our perception will lead me to more solid conclusions, but any further input on this effect would be greatly appreciated.


However, much is known about the former effect, the increased stretching in the horizontal plane. This results in an effect known as "forced perspective". You can read all about it from the links in the 'library' post above this, but the basics of it are that objects appearing wider than they are long causes our brains to perceive them as being further away.
I think it's worth having a read of the wikipedia page on this effect: https://en.wikipedia.org/wiki/Forced_perspective

Let's take a few images from that page to demonstrate how this would affect us in-game.
[Image: SanSatiroInteriors4.jpg]
Let's imagine that we are standing on the right side of this room, looking along the pews toward the left side. Now, we want to move our aimpoint to the back of the dais there, to look at the painting of Jesus. We can see that *angular distance* (remember this term!) - let's say the dais is, what, a few metres deep? So we turn to our right accordingly, and we are now looking at our saviour... Or are we?
[Image: SanSatiroInteriors3.jpg]

Hold on... that's not a few metres deep. It's an illusion - it's maybe ONE metre deep, if that! But it looked MUCH deeper than that, and we turned accordingly. The illusion has caused us to turn too far. This sensitivity formula is too fast! "MUH FEELS!" XD
Let's do it again:
[Image: Trier_Konstantinbasilika_BW_4_zurechtgez]

What's the distance from the big golden cross, to the red-covered lectern on the left there?
[Image: Trier_Konstantinbasilika_BW_2_zurechtgez]
How about now? MUH FEEEELZ!


Now, these examples may not be entirely analogous to the view we have of our game world on our screens, but I'm sure it is apparent to you by now that the angles we see affect our perception of distance, and conversely, the distances we see affect our perception of the angles. This is probably a good moment to point you toward https://en.wikipedia.org/wiki/Angular_diameter and https://en.wikipedia.org/wiki/Perceived_visual_angle

Fortunately for us, we are not at the whim of a clever church architect playing tricks with our minds. We know exactly how far the distances on screen have changed - it is the ratio between the distortion at the two FOVs. Accordingly, we can write formulae which allow for this effect.
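To put a number on that: a target at a fixed angle off-centre lands at very different screen positions under different FOVs, and the ratio between those positions is exactly the ratio of the distortions. A little Python sketch (names are mine):

import math

def screen_fraction(angle_deg, hfov_deg):
    # Fraction of the half-screen-width at which a target angle_deg
    # off-centre is drawn, under rectilinear projection (x = tan(yaw))
    return math.tan(math.radians(angle_deg)) / math.tan(math.radians(hfov_deg) / 2)

print(screen_fraction(10, 103))  # ~0.14 of the half-width at 103 HFOV
print(screen_fraction(10, 51))   # ~0.37 of the half-width at 51 HFOV
# The ratio of the two is tan(103/2)/tan(51/2) ~= 2.64 - the zoom ratio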

Please, pitch in guys. Share your thoughts, share your formulas :)

Edited by CaptaPraelium
  • 2 weeks later...

Crickets... Oh well, I'll go deeper. That should make things much worse. XD First, more food for thought. Do the lines appear to change in length?

[Image: Sarcone's Pulsating Star (dynamic illusion)]

Some will say it looks like they do; some will say they don't.

Last post, I wrapped up by talking about the amount of distortion at our two FOVs... In other threads, up until this day, there has been debate about which FOV should be the basis for the formula - vertical or horizontal. Over in the https://www.mouse-sensitivity.com/forum/topic/720-viewspeed-reworked/ thread, it was pointed out that the projection has the effect of using a set VFOV and then 'filling out' the horizontal plane to fill our monitor. This could imply that we should be using the VFOV, since HFOV varies with monitor aspect ratio. This makes sense, seeing as movement from one point to another within the VFOV limits - even in the horizontal plane of the world - should require the same amount of mouse movement, regardless of how much peripheral vision extends to the sides of the monitor. In other words, given the same VFOV setting, aiming 1cm to the left should feel the same whether we have a 16:9, a 21:9 or a 32:9 monitor...

But that assumes a positive aspect ratio (monitor wider than tall). What if we consider the centre monitor in one of those cool triple-vertical-monitor setups? Better yet, what if we are in an aircraft/spacecraft, and roll it to the side? Our monitor's vertical FOV is now in the horizontal plane of the game world, and vice versa, yet the amount of distortion in those planes does not vary - only the visible amount of distortion in the projection to our monitor varies. So, we can just use the vertical FOV again, since it is always the angle which provides a constant amount of distortion no matter the aspect of our screen.

However, if we ignore the distortion in the horizontal aspect of the monitor, we are ignoring the very effects upon our perception which render 0%MM ineffective. Consider the aforementioned effect of the Odessa Steps: this same effect would be just as apparent if we turned our heads to the sides. Then again, if we use the horizontal distortion as a basis, we are now including distortion which is not always visible - and in the most common case of a positive aspect ratio monitor positioned horizontally (read: not a vertical screen or a rolled-sideways spacecraft), this distortion is usually not visible.

On 02/01/2018 at 1:57 AM, CaptaPraelium said:

our brains don't give us a sensitivity sensation based on one part of the screen, rather an average across the screen, and this might be different for different people.

Indeed, it turns out that this is different for different people. Current studies suggest that the difference is caused by ...wait for it... eye pigmentation. Yep, the colour of your eyes makes a difference here. Does this mean we need some kind of coefficient to allow for blue vs brown vs green eyes? Fortunately not. The different eye pigmentation does appear to affect our perception of the distortion (see https://en.wikipedia.org/wiki/Müller-Lyer_illusion for more info), and other effects have been cited as the reason for the illusion, but all agree that the perception is uniform in manner: this effect does not vary between vertical and horizontal, or any diagonal in between. This can be seen in the cool animated image at the top of this post. Whether you see growing/shrinking lines or not, you'll see the same on all of them. Why is this important? Because it tells us that we do not need to account for the differences in individual perception of the distortion. We need only account for the differences between the distortion in each axis.

So, once again, analysis of the sciences tells us that the path to an answer, lies in a ratio between the distortion in each FOV. But to what extent do we measure that FOV? The monitor? A square? the vertical square or the horizontal? Maybe a circle?........ And how do we measure the distortion, since it's not the same all the way to the edges but increases as we diverge from the centre?

But that's a topic for another post on another day. I have badguys to pewpew ;)

 


OK badguys successfully pewpew'd, time to get my nerd on again, I guess.

As usual, let's start out with an image as food for thought.
[Image: projections_fed_cyl_150.jpg]

This is an image of the US Federal Reserve building, taken from the http://www.tawbaware.com/projections.htm#proj_rect website. It is 150 degrees horizontal FOV, displayed in a cylindrical projection.

As a brief aside, this image shows us why rectilinear projections are good in game, and why we want the distortion in our image. Consider two targets on the roof, one in the middle of the roof and one at the far left side. Now consider two targets on the bench where the red line is, again at the centre and at the far left side. The movement from the centre target to the side target is, in both cases, the same - but for the guys on the roof, it would appear that you'd have to use a different amount of vertical mouse movement, since the roof is curved. Keeping straight lines straight is important to us when we're moving from point A to point B in a straight line... and keeping straight lines straight is what rectilinear projection does.

So, let's see the same image, in rectilinear projection:
[Image: projections_fed_rect_150.jpg]

Note the excellent visualisation of the perception effects discussed above. Note the apparent change in distance between the trees and the building, between the camera and the building and the camera and the trees, etc etc....

Same 150 degrees HFOV. Same width. And you can see that the count of the white-line divisions (they look like they're every 10 degrees) is the same.
But there is some distortion here: they are not only stretched out at the edges, but compressed at the centre.
Roughly the same 80 degrees VFOV (based on the white lines)... but we can instantly see that the height is much, much less... and here's the important part - notice that the VFOV is the same... ON THE BLUE LINE. Now look at the edges of the image. What's the VFOV there? Maybe 30 degrees, going by those white lines (we can math that out in a bit).

So...what. Let's consider the formula provided by tawbaware, for rectilinear projection: (and I'm not doing greek letters any more you nerds)

 x=tan(yaw)
 y=tan(pitch)/cos(yaw)

Let us keep in mind that, for our considerations, we are measuring angles from the centre of the image, not from the bottom left corner. So, our formula for calculating VFOV is:

 VFOV = 2*arctan(tan(HFOV/2)*Height/Width)

We use the 2* and the /2 to account for measuring from the centre (halfway into the image), and we take our screen's height and width measurements to keep everything in the correct aspect ratio. Cool. So, oversimplifying by ignoring these considerations, we have VFOV=tan(HFOV). So, does that match the projection formula? Well, it does, IF we assume that we are only measuring at the centre of the screen, where the blue line is, where yaw=0:

 y=tan(pitch)/cos(yaw) =
 y=tan(pitch)/cos(0) =
 y=tan(pitch)/1 =
 y=tan(pitch)

But, what about when we measure at the EDGE of the screen? For that, we need to use our formula for calculating HFOV:

 HFOV = 2*arctan(tan(VFOV/2)*Width/Height)

Now we need the cosine of that, but let's remember we are measuring from the centre, so it is:

2*arccos(cos(HFOV/2))

And now we can punch that into our formula for calculating VFOV:
EdgeVFOV= 2*arctan(tan(HFOV/2)*Height/Width) / 2*arccos(cos(HFOV/2))

We can simplify here because we don't need those 2*/2*:
EdgeVFOV= arctan(tan(HFOV/2)*Height/Width) / arccos(cos(HFOV/2))

By the way, since I said we would math it out: that image is 804x178 and 150 HFOV. Using the formulas I have here, that works out to a VFOV of 79.13, and an EdgeVFOV of 30.22. Pretty close guess!
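If you'd like to check that centre figure yourself, the aspect-ratio formula above is easy to run in Python (a quick sketch):

import math

def vfov_from_hfov(hfov_deg, width, height):
    # Vertical FOV at the centre of a rectilinear image,
    # from its horizontal FOV and its pixel dimensions
    hfov = math.radians(hfov_deg)
    return math.degrees(2 * math.atan(math.tan(hfov / 2) * height / width))

print(vfov_from_hfov(150, 804, 178))  # ~79.13 degrees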

@Skwuruhl Please feel free to pitch in ... Please be gentle, this is not exactly my field of expertise. I could use your help! :)



Regardless of any screwups I've made in the math (which are all too likely - it's been a long time since I learned or used any of this kind of math), you can see my point here. 0%MM takes into consideration the distortion along the axes of our image, which are those used to define the limits of the FOV on those axes, but it does not take into account the distortion as we move away from the axes, specifically as we move away from the y axis - and as discussed above, that distortion is the reason why we perceive the image differently.

I'm going to take a break here and allow people to double-check my math before I move forward with more math based upon this post. In my last post, I ended by saying:

On 14/01/2018 at 7:24 PM, CaptaPraelium said:

So, once again, analysis of the sciences tells us that the path to an answer, lies in a ratio between the distortion in each FOV. But to what extent do we measure that FOV? The monitor? A square? the vertical square or the horizontal? Maybe a circle?........ And how do we measure the distortion, since it's not the same all the way to the edges but increases as we diverge from the centre?

And I'm obviously not done answering those questions. This post has mostly been about answering the last of them, "How do we measure the distortion?". What we have observed here is that the horizontal distortion is constant at tan(yaw) regardless of vertical angles; but mostly, this post has been about measuring the distortion in the vertical plane, and how yaw affects that distortion.

Next up, we can look into how the now-defined distortion is affected by aspect ratio. This will tell us what we should be using to calculate sensitivity, ie the first question: "To what extent do we measure that FOV? The monitor? A square? A circle? Or perhaps a diagonal?"... There's much discussion about that over in the viewspeed reworked thread, so I hope that this will tell us outright which is the best way.

8 hours ago, CaptaPraelium said:

2*arccos(cos(HFOV/2))

Just something more obvious I noticed: this is the same thing as hFOV. Trig functions don't always cancel out like this, but in this case they do.

But this would mean (assuming your math is right) that the true vertical FOV at a given angle distance from the center is

true vFOV = vFOV / Θ

79.13/150 isn't 30.22, so I assume you made a typo in the equation somewhere in your post. I double-checked this by plugging in without simplifying and got the same error.

Regardless, if I set up an equation to find at what true vFOV the y coordinates intersect, for 0° yaw from the center and 75° yaw from the center, like so: http://www.wolframalpha.com/input/?i=tan(x°%2F2)%2Fcos(150°%2F2)+%3D+178%2F804+tan(150°%2F2)%2Fcos(0°%2F2) note: tan(arctan(c*tan(Θ))) is the same as c*tan(Θ)

Alternatively a version in terms of screen (image) distance: http://www.wolframalpha.com/input/?i=tan(x°%2F2)%2Fcos(arctan(1.0+*+tan(150°%2F2)))+%3D+9%2F16+*+tan(150°%2F2)%2Fcos(0°%2F2)

I get a true vFOV of 24.14° at the very edge of that image.
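Here's that setup worked through numerically, as a sanity check (a Python sketch; variable names are mine):

import math

# Solve tan(x/2)/cos(yaw) = (h/w)*tan(hfov/2) for x, with yaw = hfov/2
# (ie, at the very edge of the image):
w, h = 804, 178
hfov = math.radians(150)
half = math.atan((h / w) * math.tan(hfov / 2) * math.cos(hfov / 2))
print(math.degrees(2 * half))  # ~24.14 degrees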

Edit: I'm not positive I set the equation up correctly but I think I did.

edit 2: okay I'm pretty confident that I did.

 

All that said:

8 hours ago, CaptaPraelium said:

0%MM ... does not take into account the distortion as we move away from the axes

If this were true then in this image only the axes of the ADS image would line up with the hipfire image, and then as you got away from the axes it'd stop lining up as much. Now if the crosshair weren't in the center of the screen then 0% wouldn't work, because then the zoom ratio wouldn't simply be tan(ads/2)/tan(hip/2). Luckily that isn't the case. As an aside, 'zoom ratio' or something similar is a more accurate name than 0% MM: while it is technically what monitor matching is equal to as the match distance approaches 0, it's more immediately the zoom ratio.

8 hours ago, CaptaPraelium said:

specifically as we move away from the y axis

The same distortion actually happens on both axes, it's just way less noticeable on the y axis because you very rarely have a vertical FOV of anything higher than 75°. The distortion is more or less aspect ratio independent. We just get more of it on the x axis because our x axis is longer. If you had a 1:1 screen the distortion would be equal on both axes. It's not tied to aspect ratio so much as how much screen you have in a given direction.

Edited by Skwuruhl
Link to comment
13 hours ago, Skwuruhl said:

If this were true then in this image only the axes of the ADS image would line up with the hipfire image

 

But the ratio (divided by cos(yaw)) is constant.
Every formula I've ever come across shows that the X axis is treated differently to the Y axis.
Do we have any evidence that the X and Y axes are treated the same?
BTW, typos are very likely. I'm doing the formulas in LibreOffice and constantly converting radians to degrees and back... It's annoying AF.

On 1/21/2018 at 7:25 AM, CaptaPraelium said:

But the ratio (divided by cos(yaw)) is constant.
Every formula I've ever come across shows that the X axis is treated differently to the Y axis.
Do we have any evidence that the X and Y axes are treated the same?

For the purpose of zoom amount they are.

Consider cameras: zoom is the ratio of focal lengths. If you go from 25mm to 50mm you have 2x zoom. When a camera projects its image using rectilinear projection, the relation between FOV and focal length is

fov = 2*arctan(h/(2f))
or
h/(2tan(fov/2)) = f

Where h is the size of the image (the film/sensor dimension) and f is the focal length. This is true regardless of which axis you measure on - x, y, diagonal, it doesn't matter, as long as you use the same axis for the ratio. From here, take the ratio between 2 focal lengths.

h/(2tan(fova/2)) / (h/(2tan(fovb/2))) = fa/fb

simplify

1/tan(fova/2)*tan(fovb/2) = fa/fb

tan(fovb/2)/tan(fova/2) = fa/fb

http://www.wolframalpha.com/input/?i=h%2F(2tan(a%2F2))%2F(h%2F(2tan(b%2F2)))

And thus we have the way to find the zoom amount between 2 rectilinear images, given the FOV of each.
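The equivalence is easy to verify numerically - a small Python sketch (h is arbitrary, since as shown above it cancels):

import math

def focal_length(fov_deg, h=1.0):
    # Focal length of a rectilinear camera with image size h
    return h / (2 * math.tan(math.radians(fov_deg) / 2))

fov_a, fov_b = 103, 51  # eg hipfire and zoomed FOVs
zoom_by_focal = focal_length(fov_b) / focal_length(fov_a)
zoom_by_tan = math.tan(math.radians(fov_a) / 2) / math.tan(math.radians(fov_b) / 2)
print(zoom_by_focal, zoom_by_tan)  # both ~2.64, whichever h you pick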

Edited by Skwuruhl

Yes, we can see that they are the same for the purpose of measuring the zoom - by your Overwatch image, which I quoted right at the top of this thread - but the point of this thread is that the distortion of the image has a perceptual effect on our brain (for an unrelated example, high FOV makes you seem like you are running really fast), and accordingly there is more to the way we perceive the shift in FOV than just the zoom amount. This is why it is important to understand the distortion fully, so that we can analyse the effects that such distortion would have on our perception.

The reason I asked if the axes are treated the same is that all of the formulae I had found for rectilinear projection imply stretching in the X axis which is greater than the stretching in the Y axis. As you say, this has no bearing on zoom amount, because as I said, the stretching is constant anyway (as you've mentioned, the same formula for zoom ratio will apply, even for the diagonal FOV)... But you have said that the stretching is in fact equal on both axes, and you are the first I have heard suggest this. We have both said in this thread that the effect of having more distortion visible is related to the aspect ratio of the monitor, but there's more to it than that, as visible in this image where the aspect is vertical:

[Image: GnomonicProjection_700.gif]

You are taking that a step further and saying that there is no additional stretching in the X axis, when compared to the Y axis, regardless of what is visible.

I asked if we have any evidence of this (because, like you, I am attempting to avoid any pseudoscience here) and the funny thing is, I think I may have already posted it. As I've discussed above, the majority of literature available regarding rectilinear projection pertains either to cartography or photography... and it seems you've brought me to realise why this is: because the projection method we are using on computers is not generally referred to as 'rectilinear projection'. Yes, it is "rectilinear" projection, in the sense that straight lines in the 3D world are represented as straight lines in the 2D projection, but in the computer graphics field, the term I'm seeing more widely applied is 'perspective projection'. This naming distinguishes what we see in-game, where more distant objects appear smaller (as in real life), thus creating a sense of depth in the image, from 'parallel projection', which is used in computers for such tasks as CAD modelling, where the lengths of the lines must also remain intact - an effect achieved by mapping the 3D coordinates onto the 2D projection via parallel lines.

As in, ALL of these are rectilinear projection:

[Image: 1024px-Graphical_projection_comparison.p]
Only the first one applies to us.

The distinguishing features here are not only that straight lines are projected as straight (ie, it is rectilinear), but that parallel lines are not projected as parallel - rather, they converge toward a vanishing point (ie, perspective). So, now that I have the correct terminology, it's been much easier to find information regarding the nature of the projection... Well, I say 'find', but I should say 'found', because I had already come across it. Some of it is even pasted above, and some I left out as it seemed irrelevant at the time. Now I know better, so let's sort that out with some link spam:

http://www.significant-bits.com/a-laymans-guide-to-projection-in-videogames/
https://en.wikipedia.org/wiki/3D_projection#Perspective_projection
https://en.wikipedia.org/wiki/Camera_matrix
 

https://en.wikipedia.org/wiki/Perspective_(graphical)#Three-point_perspective
https://en.wikipedia.org/wiki/Graphical_projection#Perspective_projection
https://www.math.utah.edu/~treiberg/Perspect/Perspect.htm
https://www.cse.unr.edu/~bebis/CS791E/Notes/PerspectiveProjection.pdf
https://en.wikipedia.org/wiki/3D_computer_graphics
https://en.wikipedia.org/wiki/3D_rendering
https://www.youtube.com/results?search_query=perspective+projection+in+computer+graphics just in case anyone prefers to learn visually, there are plenty of videos about this.

TL;DR, the X and Y axes are treated the same way. All of this is actually good news for me: not only does it answer the question as to why the image behaves as it does when the camera is rotated around the Z axis (specifically, is the X axis fixed to the camera or to the world horizon - answer: it doesn't matter), but it also negates any concern of unequal stretching between the X and Y axes. The illusory effect of forced perspective still comes into play here, but we are now down to a simple factor to consider - perceived visual angle, which is determined not only by the FOV of our image but, as Skwuruhl has touched on, by focal length... or, in the case of a computer monitor, we're talking about the size of, and distance from, the screen.

Valve touch upon this here:
https://developer.valvesoftware.com/wiki/Field_of_View#Optimization and I have briefly touched on the effect of angle of view, perceived visual angle, etc, in the posts above.

Essentially, what all of this implies is that in addition to considering the ratios between two zoom levels, aka fields of view, on screen, we should consider that those zoom levels are in fact ratios of another field of view - that of the screen itself. There is an easy way to demonstrate this, so I'll cough up Drimzi's image again:


[Image: fsdfdsf.png]

I don't know what FOV this image is (lil' help @Drimzi?) but suffice to say, it's big. The wall at the end looks tiny, and the signs on the left wall look huge. Right? Well......
Open this image fullscreen, and while you stay looking at the wall at the back, get your face right up against the screen. I mean, if your nose touches the screen, you're doing it right. (OK, maybe not quite that far, but as close as possible.) Notice the green sign doesn't look so distorted any more? That's because of the angle at which we're viewing it.

I imagine the formula which takes this into account would essentially be a 'screen to zoom ratio, to screen to zoom ratio, ratio'. And if that name didn't make you giggle, you're taking this all far too seriously :)
Basically what I have in mind is:

Take screen dimensions (pixels) and size (inches/cm diagonal)
Take distance from screen to eyes
Do basic trig to find FOV of screen
Use existing zoom ratio formula  ScreenZoomRatioa = tan(FOVa/2)/tan(ScreenFOV/2) to find ratio between screen and first FOV (say, hipfire)
Use same formula ScreenZoomRatiob = tan(FOVb/2)/tan(ScreenFOV/2)  to find ratio between screen and second FOV (say, 4x zoom)
Then simple ScreenZoomRatioa/ScreenZoomRatiob = PerceivedZoomRatio
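As a rough Python sketch of those steps (the monitor numbers are made up for the example):

import math

def screen_hfov(width_cm, distance_cm):
    # The angle the screen itself subtends at the eye - basic trig
    return math.degrees(2 * math.atan((width_cm / 2) / distance_cm))

def screen_zoom_ratio(game_fov_deg, screen_fov_deg):
    return math.tan(math.radians(game_fov_deg) / 2) / math.tan(math.radians(screen_fov_deg) / 2)

screen = screen_hfov(60, 70)        # eg a ~27" 16:9 panel viewed at 70 cm -> ~46 degrees
a = screen_zoom_ratio(103, screen)  # hipfire vs the screen
b = screen_zoom_ratio(51, screen)   # zoomed vs the screen
print(a / b)                        # PerceivedZoomRatio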

I'm throwing these ideas up as I type so there are likely issues and I'm sure that can be simplified... But it's getting late so I gotta wrap it up for now.

I seem to remember @DNAMTE or @potato psoas were already working on a formula which took distance from screen into account. I don't want to reinvent the wheel or steal anyone's thunder here. If you guys have something going on this I'd love to hear about it :) Likewise @Skwuruhl I know you speak this stuff like a second language so if you can whip something up that would probably save me a lot of time. I have a feeling that an 'all day job' for me is a '5 minute break' thing for you hehe
Otherwise I might go ahead and cobble something together sometime in the next couple of days.

14 hours ago, CaptaPraelium said:

Use existing zoom ratio formula  ScreenZoomRatioa = tan(FOVa/2)/tan(ScreenFOV/2) to find ratio between screen and first FOV (say, hipfire)
Use same formula ScreenZoomRatiob = tan(FOVb/2)/tan(ScreenFOV/2)  to find ratio between screen and second FOV (say, 4x zoom)
Then simple ScreenZoomRatioa/ScreenZoomRatiob = PerceivedZoomRatio

that's just tan(FOVa/2)/tan(FOVb/2)

http://www.wolframalpha.com/input/?i=tan(a)%2Ftan(s)+%2F+(tan(b)%2Ftan(s))

In fact you could replace tan(s) with anything and it'd still get canceled out.

Regardless, I don't think distance from your screen should affect which sensitivity scaling you use. It might reasonably affect your FOV choice, and then by extension sensitivity, but not sensitivity directly.

Edited by Skwuruhl

Skwuruhl is the most knowledgeable in the field of mathematics and firmly believes that the mathematically correct ratio is the correct method to scale sensitivity.
You are the most experienced in the field of pewpew and have constructed outstanding formulae based on your experience.
I am a programmer, and am following a programmatic method to find a formula: working my way backward from what we know and experience, to discover what we want to know and experience.


You're quite right that, with experience, and 'getting used to it' (ie, practice), ANY sensitivity will do just fine. Heck, they didn't have any of these configuration options in the 80's and I did just fine blasting pixels at 640x480x16 colours ;) I bet we've all used a cruddy joystick to land on tiny platforms after jumping over some dangerous object with perfect timing. Our brains will take care of it eventually, as we gitgud.


It's also true that the optimal process will always be optimal; if we're going to get used to a scaling method, it makes sense to use the one which is correct, and as it stands, that's zoom ratio. That being said, one can't help but wonder whether the reason why zoom ratio feels wrong is just bias (ie, we 'got used to' something else) or if there's more to it. Bias is incredibly powerful, and I will not be at all surprised if that turns out to be all there is to it. Still, I am not satisfied with making any assumptions, be they that bias is the critical factor here or that it is not. I have a strong inclination towards Skwuruhl's approach of 'math don't lie', but after spending 6 months in an attempt to "suck it up" and "get used to" using 0%MM aka zoom ratio, in hopes of overcoming my bias, I still felt that there was something else at play... and I'm just that type of guy who has to ask, "Why?".

I'm certainly not an expert in human visual perception, but I've spent a lot of time over the past few months learning as much as I could... and what I've learned has told me that there is no doubt that we all perceive the image differently, as a result of the distortion introduced by the projection. In my most recent posts, I have attempted to account for that distortion. As we know, zoom ratio is the mathematically correct method to scale between zoom levels, but these are not solid rectangles we're scaling - they are filled with an image. That image is what gives us the perception that our sensitivity should scale at all. As Valve put it, "an optically correct perspective can be obtained by matching the Camera's angle of view to the angle between the Player's eye and the edges of the image on his screen. Obviously this angle varies according to the actual size of the Player's screen and how far away from the screen he is actually sitting"

It is a common analogy to consider the screen as a 'window' through which we view the game world, and where our angle of view of the screen matches the FOV in game, that analogy holds true... but once those angles differ, the analogy weakens. A more appropriate analogy would be that the 'window' formed by our screen is in fact a 'lens'. If we consider it as such, zoom ratio is the correct scaling method for us to use... but there's more to consider. If we were looking at the real world through such a 'lens' - like a telescope with a big rectangular eyepiece - using zoom ratio would work just fine, because we would just be zooming an optically correct image into another optically correct image. However, in game, we are not doing this. We are zooming a distorted image.

Consider the design of a zoom lens:

[Image: 380px-Zoomlens1.svg.png - zoom lens diagram]

The three lenses on the left perform the actual zooming by adjusting the focal length, and the lens on the right focuses that image onto the eye (or film, or whatever). For our purposes, zoom ratio describes the result of the three zoom-system lenses; however, we do not have an analogous formula for the focusing lens on the right. At present, that is fixed, controlled by our distance from the monitor.

To use another image to explain it.... First, the description of the image:
"A varifocal lens. Left image is at 2.8 mm, in focus. Middle image is at 12 mm with the focus left alone from 2.8 mm. Right image is at 12 mm refocused. The close knob is focal length and the far knob is focus. "

[Image: 1920px-Varifocal_example.jpg]

Zoom ratio describes the difference between the left and centre images. We do not have a formula to describe the difference between the centre and the right image.

....I'm working on it ;)

Edited by CaptaPraelium

It's probably that people have been playing with 75% match for a very long time (or, as it was back then, 100% of 4:3). I mean, it's more or less how CS 1.6 did it nearly 2 decades ago; it's what people are used to. While I do think 0%/zoom ratio would ultimately match better, if you're that used to 75% then it could take a while to adjust to a different sensitivity - especially for Counter-Strike players, who've basically been using the same sensitivity scaling for more than a decade. It's not surprising that the average zoom sensitivity of pros is basically 1.

OWL, on the other hand, has an average of 38.53, which would be like using 0.8477 zoom sensitivity in CS:GO, or 18% match distance. The most common sensitivity is 40. Overwatch being a new game with new FOVs and a zoomed sensitivity slider, players are probably more likely to adjust sensitivity to their liking - especially since, unlike CS:GO, the default zoom sensitivity is pretty garbage (21% lower than 0% match). Imo, that it's gravitated towards 38 rather than 45 (75% match) says something.

Also as another example:

[Image: YfzceCQ.png - hipfire screenshot]

Here the enemy (specifically its eye) is ~172 pixels or ~12.7° away from the crosshair. With my sensitivity this means it'd take me ~1.1 cm of mouse movement to flick to the target.

[Image: u3Z1otn.png - ADS screenshot]

When I ADS the enemy is now ~455 pixels but still ~12.7° away from my crosshair. Since the enemy is 455/172 or ~2.6 times "further" away it should take a correspondingly longer mouse movement to turn those 12.7° and aim at it. The scalar to do this would be 172/455 = 0.378.
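Those pixel figures can be reproduced from the FOVs alone. A Python sketch (the 1920-pixel screenshot width is my assumption; the 103°/51° hipfire/ADS FOVs are given further down the thread):

import math

def pixels_off_centre(angle_deg, hfov_deg, width_px=1920):
    # Horizontal pixel offset of a target angle_deg off-centre,
    # under rectilinear projection
    frac = math.tan(math.radians(angle_deg)) / math.tan(math.radians(hfov_deg) / 2)
    return frac * width_px / 2

print(pixels_off_centre(12.7, 103))  # ~172 px at hipfire
print(pixels_off_centre(12.7, 51))   # ~454 px when ADS
print(172 / 455)                     # ~0.378, the zoom-ratio scalar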


...and as usual I find myself asking, "Why?"......

I mean, we know that in game that robot seems "further" away, but if this were real life and I switched from irons to a zoomed optic on a rifle, I wouldn't feel like it was further at all. It'd just feel like ~12 degrees no matter what.
And we know that the target is ~12 degrees from the crosshair in both images, but it sure LOOKS further in the second image. Muh damn feels are totally lying to me :P
Plus, as you've mentioned, when left to adjust the sens according to their feelz, the community has gravitated toward a certain number. Like you say, it suggests something. But what? Why 38?

What are the VFOVs on those two images, skwuruhl?


What I find interesting about this is that everyone says the same....Whether they be from a background of CSGO 75%, COD 0%, or in my case, thumbstick. There's surely more to it than bias alone...
I've got some time to spare today so I might get a formula going and give it a try before I post it here to embarrass myself

On 1/26/2018 at 12:00 AM, CaptaPraelium said:

I mean, we know that in game that robot seems "further" away, but if this were real life and I switched from irons to a zoomed optic on a rifle, I wouldn't feel like it was further at all. It'd just feel like ~12 degrees no matter what.

I don't know that aiming with a physical gun and a mouse/monitor are all that comparable

On 1/26/2018 at 12:00 AM, CaptaPraelium said:

Plus, as you've mentioned, when left to adjust the sens according to their feelz, the community has gravitated toward a certain number. Like you say, it suggests something. But what? Why 38?

What are the VFOVs on those two images, skwuruhl?

51 and 103 horizontal, ~30.04 and ~70.53 vertical, 38 is zoom ratio, 45 would be 75% match.
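(Both of those slider values check out numerically, assuming the Overwatch slider is simply the sensitivity scalar x100 - a quick Python sketch:)

import math

def tan_half(fov_deg):
    return math.tan(math.radians(fov_deg) / 2)

print(100 * tan_half(51) / tan_half(103))  # ~37.9 -> the '38' slider value (zoom ratio)
print(100 * math.atan(0.75 * tan_half(51)) / math.atan(0.75 * tan_half(103)))  # ~45.4 -> '45' (75% match)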

22 hours ago, CaptaPraelium said:

What I find interesting about this is that everyone says the same....Whether they be from a background of CSGO 75%, COD 0%, or in my case, thumbstick. There's surely more to it than bias alone...
I've got some time to spare today so I might get a formula going and give it a try before I post it here to embarrass myself

https://docs.google.com/spreadsheets/d/1th02m-cuo7mg-aCKPkSRTjJ7IsmCeSBuzSD-gTP1_u4/edit?usp=sharing Filtered for all OWL players who have played Widow or Ana during the tournament (and whose settings are known). It's evidently not everyone.

Something worth mentioning about CS:GO is that if you change zoom sensitivity for one zoom level, it "messes up" all the other zoom levels. If there is no bias then this could account for most pros using the default.

22 hours ago, Drimzi said:

Zoom ratio: Camera zooms in and out; physical mouse movement depends on the zoom, but the sensitivity of the mouse is also scaled by the zoom amount.

What? Nothing you said here is wrong, you just worded it really weird? Mouse sensitivity is scaled by zoom amount so that the physical mouse movement also depends on the zoom.

22 hours ago, Drimzi said:

Monitor distance (Arbitrary distance): Camera zooms in and out; physical mouse movement depends on the zoom, sensitivity is scaled by an arbitrary coefficient, mouse distance to an arbitrary point is always the same. (Most commonly 4:3 match, but new games like PUBG and Battalion are 16:9 match)

Monitor distance (1:1 Match): Camera zooms in and out; physical mouse movement depends only on the zoom (IMO), mouse distance to 1:1 dimensions are always the same (top and bottom for widescreens in landscape orientation).

Monitor distance does not depend on the zoom. Zoom is how much larger or smaller everything appears to get. I can't think of a better description than "it scales to match mouse movements to a certain monitor distance", but it doesn't scale by zoom amount. If it did, you could factor the zoom out of the equation, but you can't.

Also why not just call it vertical match? People don't play with their monitor in portrait orientation.

7 minutes ago, Skwuruhl said:

I don't know that aiming with a physical gun and a mouse/monitor are all that comparable

Apparently not at all - but that's kinda my point. It makes me ask: why?

If I look at a kangaroo 100m away, then scope in on it, it still looks 100m away, just zoomed in.
If I look at a bot on screen 30m away, then scope in on it, it looks a lot closer. Or, to be more precise about it... when I zoom out, it looks a lot further away than it really is, especially when I run a higher FOV.
The only real difference in what is happening is that the image is distorted... and we know now that the only reason the image is distorted - when it hits our eyes, not when it's drawn on screen - is our distance to the monitor, and the fact that that distance never changes when the zoom does.
 

7 minutes ago, Skwuruhl said:

51 and 103 horizontal, ~30.04 and ~70.53 vertical, 38 is zoom ratio, 45 would be 75% match.

Funny, that.
Yeh, there's some swing to it, but there's no doubt those numbers are centred around zoom ratio.


Man I would LOVE to see the monitor size and distance from the monitor for those guys. Have a funny feeling that no such stats exist :(

I've done some fun experiments regarding the whole 'it looks further' thing. If you move away from your monitor when you zoom, to the 'optically correct distance' as Valve put it... it totally negates that effect. I've got some sketches here, but no formula as yet. I'm really close to something useful, so I'll probably post some math tonight.
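(That 'optically correct distance' is just trig - the distance at which the screen subtends the game's FOV. A quick Python sketch, with a made-up 60cm-wide monitor:)

import math

def optically_correct_distance_cm(hfov_deg, screen_width_cm=60):
    # Eye-to-screen distance at which the screen subtends hfov_deg
    return (screen_width_cm / 2) / math.tan(math.radians(hfov_deg) / 2)

print(optically_correct_distance_cm(103))  # ~24 cm at hipfire
print(optically_correct_distance_cm(51))   # ~63 cm zoomed - ie, move back as you zoom in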
 
