Perceived sensitivity


Recommended Posts

I really like this topic; if this post is polluting it, feel free to delete it :)

 

I think it is very hard to find a perfect formula for every game, simply because games have different mechanics. If we only talk about FPS games, I think these things are relevant:

- There are hitscan mechanics, and more realistic mechanics with bullet speed/drop.

- There are FPS games with recoil and without recoil.

With hitscan and no recoil, using 0%MM makes more sense to me, since tracking a target is almost always the same in every situation: put the crosshair on the target and track it. Sure, moving from hipfire to a scope will almost never land perfectly dead centre on a target, so we have to adjust, but once we put that crosshair on the target the sensitivity should be correct, right?

 

With hitscan and recoil things become a bit harder. We still have the same issue when going from hipfire to scoping, but once we have the crosshair adjusted on target and shoot, the recoil kicks in. Depending on the game this recoil can be small or very large, but it will move the crosshair off target and we have to readjust, making the sensitivity incorrect again for a brief moment.

 

Now, the issue is when there are realistic mechanics in play: bullet drop and bullet speed. Depending on how far away the target is and how fast it moves in a certain direction, we have to put the crosshair off target, and this can change with every shot we take. The target might try to dodge bullets by rapidly changing direction and speed, so in these games we have to compensate for bullet drop, bullet speed, recoil and target movement.

 

The first shot might need to be placed 2 cm to the left of the target in a 4x scope; after that shot the recoil kicks in, so our crosshair ends up 3 cm higher than before and we have to drag it down, but in the meantime that target went from running full speed to the left to a full stop, so we have to move the crosshair to the new location, etc. In these games, tracking a player almost never means you are able to keep the crosshair dead centre on a stable path, so to speak. There are actually a lot of small off-target flicks while using a scope on a moving target.

Now with 0%MM, human errors in compensating for recoil and bullet drop/speed will add up quite fast imho; the recoil compensation might be done wrong, and you then have to adjust even more for the next shot.

 

This, for me, is why 0%MM with an AWP in CSGO feels great, but 0%MM in PUBG feels way too slow. It always feels great on static targets, though.

It also might be the case that 0%MM is still correct, but just feels too slow and needs time to get adjusted to.

Edited by sammymanny
Link to comment

At 0%MM you have to scale your recoil control movements proportionally with the difference in zoom. This makes recoil control with a red dot and an 8x scope significantly different, which is undesirable, but it is something you have to deal with; otherwise you compromise everything else in order to somewhat preserve the recoil motions.

Link to comment

Because Viewspeed v1/v2 and Monitor Distance 0%, 56.25%, 75%, and auto% didn't actually work, you guys are working on finding a new method for calculating sens between two different FOVs, right?

Or did I misunderstand what you guys are doing here?

Here's the thing: most CSGO pro players still use zoom_sens 1.00, which is 75% monitor distance. So Monitor Distance 75% works for most people, right?

Link to comment
16 minutes ago, kittawat said:

Because Viewspeed v1/v2 and Monitor Distance 0%, 56.25%, 75%, and auto% didn't actually work, you guys are working on finding a new method for calculating sens between two different FOVs, right?

Or did I misunderstand what you guys are doing here?

Well, the other methods work well for their intended functions. This one just does the job of matching sens between FOVs in a different way.

Think of it like this (I simplify here but you get the idea):
No formula can ever be perfect all of the time. So we have to make some sacrifices, and try to gain whatever we can.
0% is always perfect at the centre of the screen. But it is always imperfect everywhere else.
56.25% is always perfect at the top and bottom of the screen. But it is always imperfect everywhere else.
This formula would be perfect in different places depending on the zoom level. But it is always imperfect everywhere else.
... But, if we add up all the imperfections of the others, and all the imperfections of this, then this one will be the least imperfect.
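To make that concrete, here is a minimal Python sketch of the monitor-distance conversion being compared (the FOV numbers are hypothetical): the multiplier is atan(m * tan(FOV_new/2)) / atan(m * tan(FOV_old/2)) for a match point at fraction m of the horizontal half-screen, with m -> 0 giving the 0% (zoom ratio) case and m = 0.5625 the 16:9 vertical-edge (56.25%) case.

```python
import math

def mm_scale(hfov_old_deg, hfov_new_deg, match):
    """Sensitivity multiplier (old FOV -> new FOV) that keeps the mouse distance
    to a target at `match` * 100% of the horizontal half-screen the same for
    both FOVs. match = 0 is the zoom-ratio (0%) limit; match = 0.5625 is the
    16:9 vertical-edge (56.25%) case."""
    t_old = math.tan(math.radians(hfov_old_deg) / 2)
    t_new = math.tan(math.radians(hfov_new_deg) / 2)
    if match == 0:
        # limit of atan(m*t_new)/atan(m*t_old) as m -> 0
        return t_new / t_old
    return math.atan(match * t_new) / math.atan(match * t_old)

# Hypothetical example: 103 deg hipfire converted to a 40 deg scope.
for label, m in (("0%", 0.0), ("56.25%", 0.5625), ("100%", 1.0)):
    print(f"{label:>7}: multiply sens by {mm_scale(103, 40, m):.4f}")
```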

 

16 minutes ago, kittawat said:

Here's the thing: most CSGO pro players still use zoom_sens 1.00, which is 75% monitor distance. So Monitor Distance 75% works for most people, right?

Honestly, the most important thing is that a player is accustomed to whatever settings he has. It really doesn't matter what those settings are; if you are used to those settings, then you will be best with those settings.

Link to comment
1 hour ago, Bryjoe said:

how can you make the ratio perfect in all scenarios?

You can't. It's not possible. But what we can do, is make it as 'perfect' as possible.

We can do this by starting with a formula which encompasses all scenarios, and that's what we call monitor matching. The scenarios could theoretically reach off the edges of the monitor (consider 100%MM on a 4:3 screen, that's off the sides. It's also off the top and bottom edges of a 16:9 monitor), or be constrained within the limits of the monitor, whatever. But we have a formula which tells us what is 'perfect' for any *one given scenario*. This is where we get our monitor match percentage, eg 0% is correct for the scenario where the target is at 0% of the width of the monitor, 50% is correct for the target halfway to the edge, 100% is correct for the edge, etc. It also tells us how 'imperfect' it is for *all the other scenarios*. We find the balance point between right and wrong.

Essentially this will end up with a formula which is no longer "too fast at low zoom" or "too slow at high zoom" or "good for tracking but not for flicks" or "good for flicks but not for tracking" as we have seen previously, but instead it will be "too fast in the middle and too slow at the edge", but the "too fast" and "too slow" will balance out perfectly, so it will be not too fast or too slow, given "all scenarios", and it will be balanced between tracking and flicking, etc. It's the 'Goldilocks' formula. Yeh I think it just got a name XD


So, how can we make the ratio perfect for all scenarios? We can't. Ever. But we can define those scenarios and most importantly we can find the most balanced ratio for them.
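To put numbers on "perfect for one scenario, imperfect for the rest", here is a small sketch with hypothetical values (103° hipfire at 30 cm/360, a 40° scope, converted with a 50% match): it compares the mouse travel needed to reach targets at various fractions of the half-screen. The two columns agree exactly at the 50% match point, and the scope is faster inside that point and slower beyond it - the imbalance the proposed formula tries to average out.

```python
import math

def required_cm(target_frac, hfov_deg, cm_per_rad):
    """Mouse travel (cm) to rotate the crosshair onto a target sitting at
    `target_frac` of the horizontal half-screen, for a given FOV and cm of
    mouse movement per radian of rotation."""
    return math.atan(target_frac * math.tan(math.radians(hfov_deg) / 2)) * cm_per_rad

# Hypothetical numbers: 103 deg hipfire at 30 cm/360, 40 deg scope, matched at 50%.
hip_cm_per_rad = 30 / (2 * math.pi)
match = 0.5
scale = (math.atan(match * math.tan(math.radians(40) / 2))
         / math.atan(match * math.tan(math.radians(103) / 2)))
scope_cm_per_rad = hip_cm_per_rad / scale  # lower scoped sens -> more cm per radian

for frac in (0.1, 0.25, 0.5, 0.75, 1.0):
    hip = required_cm(frac, 103, hip_cm_per_rad)
    scope = required_cm(frac, 40, scope_cm_per_rad)
    print(f"target at {frac:4.0%} of half-screen: hipfire {hip:5.2f} cm, scope {scope:5.2f} cm")
```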

Link to comment

Posting this separately because I can't figure out how to do spoilers and honestly I'm not even sure I should mention it hahaha

<spoiler>I'm hesitant to mention this because I want to avoid any kind of arbitrary numbers being involved, but yes, it will be possible to focus the balance on a certain area of the screen (or beyond it). I mention this because, as sammymanny has outlined above, use cases vary. I mean, I practically always flickshot... and practically never beyond 50% of my screen height. So do I really want to sacrifice accuracy within that 50% in order to be balanced with the area beyond it? I don't know. This should also have the interesting side effect that if one sets the inner and outer limits to the same amount, we'd achieve the same results as monitor matching. So, using my "never flick past 50%" example, if I set both to 25% and 25%, then I'd get the same result for all scopes as 25%MM. But if I set it to 0% and 50%, it will find a value in between those two areas of my monitor, for each scope, which is the most balanced between 0% and 50% of my screen. Like I say, I'm hesitant to introduce arbitrary variables here, but there are arguments, not limited to use case, for biasing towards certain areas of the screen:

image.png.68d4bf04edbd76b224a0352c760a5494.png

Really though, using bounds of 0% and 100% should work out fine. It already naturally balances toward the centre as a result of the nature of the projection (consider how it stretches the image at the edges). In fact, the results I've seen so far suggest that it converges toward 0% .. while diverging from zoom ratio. That's pretty special. Anyway now I'm diverging from the topic ;)

For now I'm just sticking with vertical 100% but there's also horizontal screen space to worry about. And then there's balancing between them, too, because rectangular screens are definitely a thing. Not to worry, the formula that comes from doing this first part will be applicable to the rest of it.</spoiler>

Edited by CaptaPraelium
Link to comment

What works for me, and which you can give a go as well, is the following:

1. Match at 100% horizontal for each FOV (preferably on 16:9, or if the game handles projection by Vertical-), if and only if you are able to "see" the entire FOV (similar to Quake zoom).
2. Match at n% whilst scoped, based on how many degrees the scope actually allows you to utilize (without moving the mouse). Take for instance the 1st zoom level of the AWP in CS, which narrows the FOV to 40. Now, if you were to use 100% horizontal MM in this instance it will feel too fast; but since the information is displayed in fewer degrees on your screen (due to the scope's behaviour of "tunneling" the vision), match instead at the edge of the scope and NOT at the edge of your screen. Think of how impractical a higher-than-100% horizontal monitor match is and correlate it to the scoped FOV, which everyone truthfully says is too fast. "Why should I use 100% vertical MM if I'm not able to see the entire information?"

What can "theoretically" be the best of the worst matches you trynna find, might never be practical for everyone.

 

Have a productive day,
Munty

 

Edited by MuntyYy
Link to comment

What I use and tried to explain perhaps doesn't make much sense from a scientific or mathematical standpoint, nor may it be defined as correct by formulas; nonetheless, I advise everyone to give it a chance.

Edited by MuntyYy
Link to comment
40 minutes ago, MuntyYy said:

due to the scope's behaviour of "tunneling" the vision ... "Why should I use 100% vertical MM if I'm not able to see the entire information?"

Because there's more to it than what you can see on screen. For example, 100% vertical provides you with a division of the FOVs which the game is using to present the information on screen. It's the only static match which is as logically sound as 0%. 0% is zoom ratio, 100% vertical is angle ratio.

A6U8iQv.gif

Link to comment

100% is a lot more arbitrary than 56.25%. 100% depends on how wide your monitor is. Clamping the vertical and expanding the horizontal with the aspect ratio also doesn't affect the focal length. 1:1, 4:3, 16:9, 21:9, 32:9, etc. will all look the same, with extra vision just being tacked on, and all monitors will have wildly different sensitivities using 100%.

 

A screen distance match is only completely true for pure pitch/vertical movement too. Although I could be wrong... the distance could still be true for yaw with a different trajectory.

projections_fed_rect_150.jpg

Pitch will always be a predictable movement, following that straight blue line. Yaw has a different curvature depending on the pitch. Aiming at an incline is going to cause very unpredictable results if you are basing all your movements on a screen distance.

If the concept is only valid for pitch, then anything higher than 56.25% is outside of the screen for pitch, which kind of defeats the purpose of a monitor distance match. But since you most likely don't even perceive sensitivity using screen distances, I guess none of this really matters anyway; a static distance match will always be flawed.
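One way to see the aspect-ratio point concretely: with Hor+ scaling (fixed vertical FOV, horizontal FOV derived from the aspect ratio), a 100% horizontal match gives a different multiplier on every aspect ratio, while matching against the vertical FOV gives the same multiplier on all of them. A minimal sketch with hypothetical FOVs:

```python
import math

def hfov(vfov_deg, aspect):
    """Hor+ scaling: horizontal FOV derived from a fixed vertical FOV."""
    return 2 * math.degrees(math.atan(math.tan(math.radians(vfov_deg) / 2) * aspect))

def match_scale(fov_old_deg, fov_new_deg, frac):
    """Monitor-match multiplier at `frac` of the half-screen along the axis the
    FOVs describe (frac = 1.0 is that screen edge)."""
    return (math.atan(frac * math.tan(math.radians(fov_new_deg) / 2))
            / math.atan(frac * math.tan(math.radians(fov_old_deg) / 2)))

vfov_hip, vfov_scope = 74.0, 30.0  # hypothetical hipfire and scoped vertical FOVs
for aspect in (4 / 3, 16 / 9, 21 / 9):
    h = match_scale(hfov(vfov_hip, aspect), hfov(vfov_scope, aspect), 1.0)
    v = match_scale(vfov_hip, vfov_scope, 1.0)
    print(f"aspect {aspect:.2f}: 100% horizontal -> {h:.4f}, 100% vertical -> {v:.4f}")
```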

 

Link to comment

I'm going to copy paste an explanation someone gave in a Discord on why a screen distance match is flawed, and why zoom ratio is the only correct way to convert sensitivity. He absolutely despises this website, so will probably never come here to explain for himself.
 

Quote

image.thumb.png.e91eb17ab7fb9b5d0d4f1b6840939bdd.png

If you print this out and make a giant cube and stand at the center of it, it feels like you're looking out of a giant wireframe globe

Version for UE4 Cubemap:

image.thumb.png.99a46c7ee4b6a68fd1de075514a08797.png

Each line is a 5-degree increment. The significance of the lines is that they show how your mouse X movement curves when you're aiming with significant vertical incline. Notice that all vertical mouse movements are geodesics. This means that your vertical mouse movements (pitch) are predictable at all angles, whereas your yaw will "curve" if you're looking straight up or down.

Trivia: the curves in the sideways view are called hyperbolas. Basically, they represent what the perspective projection of a latitude line looks like when its centre lies beyond infinity. If the centre of the circle lies at exactly infinity, it becomes a parabola. If it lies below infinity, then it's an ellipse.

What do I mean by lying on or beyond infinity? Well, if you cut the globe right through the centre, that cut represents a 180° FOV image (technically impossible, an infinitely large image). If the tip of one of the little circles happens to intersect that plane, that's what I mean when I say it lies at infinity, since the position of that tip is projected at infinite distance in the 180° FOV image.

If the point lies beyond the 180° FOV image, then it becomes a hyperbola.

This is a concept called eccentricity, which describes how far apart the two virtual centres of these conic sections are.

A circle has its two centres at the same point. An ellipse has its two centres a fixed distance apart.

A parabola is basically what you get by taking those two centres and moving one of them to exactly infinite distance away (technically they aren't actually centres, they're what are called the foci of the conic sections).

image.thumb.png.581e26f889d2364aef4a9bc3b977857f.png

So when you aim horizontally in OW, your mouse yaw generally follows the shape of a hyperbola on your screen, or rather the path it takes to get to a target on low ground or high ground.

when you're super vertical it becomes a noticeable parabola and even an ellipse.

This is why attempting to set your zoom sensitivity to "match the distance it takes to reach a certain point on screen" is fundamentally wrong on principle because the eccentricity of your aim depends solely on the current pitch. It does not scale with your screen projection at all.

Only scaling the differential distance holds true since it's fov invariant, whereas the "USA system" in Battlefield is dependent on an assumption of invariance of motion to move to a certain position on screen, but the eccentricity doesn't scale with screen proportion or field of view. The parametric path is therefore incorrect as soon as you deviate from eccentricity of infinity, i.e. perfectly horizontal.

 

Since absolute displacement on the parametric path of relative object movement in your camera is not invariant of screen projection, any standard that attempts to equalize based on it will be incorrect.

However, the differential parametric displacement is preserved, because that's literally the definition of differential elements, therefore only tan scaling is a valid quantification of zoom sensitivity and any deviation from it must be explicitly described in the framework of a deviation from the tan scaling or as the absolute scaling of sensitivity.
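To make the quoted geometry concrete, here is a small sketch (my own illustration, not from the quoted discussion; the FOV and angles are arbitrary) that projects a target at a fixed 30° elevation onto a pinhole image plane at several yaw offsets: the on-screen path traced by pure yaw is curved, and its shape changes with the camera's pitch, which is why a fixed screen-distance rule cannot hold for yaw at an incline.

```python
import math

def screen_pos(az_deg, el_deg, pitch_deg, half_vfov_deg=30.0):
    """Project a direction (azimuth, elevation in degrees) onto the image plane
    of a pinhole camera that is yawed to 0 and pitched up by pitch_deg.
    Returns (x, y) in units of half the screen height."""
    az, el, pitch = (math.radians(a) for a in (az_deg, el_deg, pitch_deg))
    # direction in world space
    dx = math.cos(el) * math.sin(az)
    dy = math.sin(el)
    dz = math.cos(el) * math.cos(az)
    # rotate into camera space (world-to-camera pitch rotation about the x axis)
    cx = dx
    cy = dy * math.cos(pitch) - dz * math.sin(pitch)
    cz = dy * math.sin(pitch) + dz * math.cos(pitch)
    f = 1.0 / math.tan(math.radians(half_vfov_deg))  # focal length in half-screen-heights
    return f * cx / cz, f * cy / cz

# A target 30 degrees above the horizon, seen at several yaw offsets.
for pitch in (0, 30):
    path = [screen_pos(az, 30, pitch) for az in (0, 10, 20, 30)]
    points = ", ".join(f"({x:+.2f}, {y:+.2f})" for x, y in path)
    print(f"camera pitch {pitch:2d} deg: {points}")
```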

 

Link to comment
3 hours ago, Drimzi said:

100% is a lot more arbitrary than 56.25%. 100% depends on how wide your monitor is.

Exactly this^ You'll notice I almost always give percentages based on the vertical aspect. A sensitivity change based on FOV angles changing should not be affected by whether you use an ultrawide monitor or not. As I've mentioned before, what if we turn the monitor sideways? It happens every time I roll my plane... Should my sensitivity change then? Of course not.


I really like the 2nd post there. I've covered a few times in this thread that we should not necessarily be concerning ourselves with what is projected onto the screen, and given the example of rolling a plane or spacecraft to demonstrate that what's on screen and off screen is not determined only by FOV but also by the perspective of the player. I posted the gif above to demonstrate that what's visible in the scope isn't really important, but it also gives you a hint that I like not only to fly, but to shoot at planes, and sometimes they are quite high overhead, so the effect you just described is all too familiar to me. If a plane is directly overhead, it's like your horizontal sensitivity has been turned wayyyyyy down. A 360-degree horizontal rotation is some 3+ times my screen width at the horizon, but looking 'straight up' (in quotes because the soldier can't look all the way straight up), it is a circle just a few inches wide.

It has to be this way, or rather we choose to program it this way, because the alternative is to keep the lateral movement rather than horizontal, and if we imagine an example where we are looking *up* at a rooftop, and we move our mouse directly to the left, we would now have the effect of moving left *and down* in the game world.

This is one of the reasons why my formula operates by division of angles and not by screen distance. As I mentioned above (in my "I don't know how2spoiler"), there is also the opportunity to extend the boundaries of the formula beyond the edges of the screen, and we have that in the geogebra graphs posted earlier, extended to infinity, it's the purple line. Just as you have encouraged the use of the term "zoom ratio" for 0%MM, I have encouraged the use of the term "angle ratio" for 100% vertical. After all, the result is exactly the same as the ratio of the FOV angles.

In previous posts ITT, I've looked at how we actually judge the size of, or distance off centre to, an object/target - in other words the distance between two points. We do this by means of angles and depth perception, which makes a lot of sense, I mean, how else could we possibly do it? We have a rotating eyeball and we can measure the rotation of it in order to ascertain the angle between two points, and we have the effects of perspective to indicate depth, so we can do that trigonometry and our brain does it for us instantly. However, as we zoom the image on screen, we alter the perspective effects, thus altering our perception of depth, thus our determination of the distances between points, and that is exactly why we feel a need for a change in sensitivity. So, if our perception of these things is based on angles, there's a strong argument for using only angles to calculate sensitivity. 0% and 100% vertical 'monitor matching' both do this, in their respective guises as zoom ratio and angle ratio.

So, having excluded the validity of monitor matching, why might we consider one or the other, or neither, of zoom or angle ratios? Because for all the math in the world that might be super correct to a machine, we are human animals with brains that lie to us, largely because our monitor is lying to us. We all know that zoom ratio works on paper and we have all felt that it feels wrong in reality. Likewise simply dividing the FOVs. Neither of them actually work because we are always distorting the image projected onto the screen, thus always distorting the angles which our brains use to determine the required amount of movement.

Our perception of the projected image is a real thing and we can't just ignore it for the sake of pure mathematics.

Link to comment
6 hours ago, Drimzi said:

So when you aim horizontally in OW, your mouse yaw generally follows the shape of a hyperbola on your screen, or rather the path it takes to get to a target on low ground or high ground.

when you're super vertical it becomes a noticeable parabola and even an ellipse.

This is why attempting to set your zoom sensitivity to "match the distance it takes to reach a certain point on screen" is fundamentally wrong on principle because the eccentricity of your aim depends solely on the current pitch. It does not scale with your screen projection at all.

 

Capture.JPG

fada.JPG

jjfs.JPG

Slowly I'm starting to understand why 100% monitor match (on paper) is flawed. As you move your mouse up or down, the horizontal "perspective" also changes: from a hyperbola to a parabola and then an ellipse. But are you aware of it in the game world? Isn't your horizontal movement the same at the cross-section as it is at any degree (up or down)?

Ignore my question, it's foolish. The distance on screen can't be the same, as your mouse will follow the path of a parabola/ellipse or even a circle at the very extremities of the projection.

Edited by MuntyYy
Link to comment
2 hours ago, CaptaPraelium said:

Exactly this^ You'll notice I almost always give percentages based on the vertical aspect. ...

I don't even think we need to worry about distortion anyway. If we practice enough, we will eventually develop muscle memory for every FOV that we use. It's the price we pay for wanting to zoom in.

Also, I already tried to find the "most optimal method", but I abandoned that idea, because if you aren't maintaining the same match point then you are no longer abiding by the gear ratio principle - the entire concept behind how we sync sensitivity.

 

I'm all for changing monitor match to angle match too. That's what I've been talking about in my forum post with regard to getting screen distance and sitting distance added to the calculator, since they definitely affect perceived sensitivity.

I made some diagrams, demonstrating that if you match the physical monitor distance to the same visual angle, then you can use exactly the same sensitivity.

Or another way to look at it is if different in-game FOVs were matched with their real life visual angle then we wouldn't need to convert sensitivity either. Only problem is that we can't move the monitor backwards and forwards. Not that we'd want to though - we are zooming in for a reason.

Sitting distance affects perceived size and it's the very reason why we have to convert sensitivity in the first place.
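As a rough sketch of that idea (assuming a flat screen and matching the vertical angle; the numbers are just examples): the viewing distance at which the screen subtends the same vertical angle as the in-game FOV is d = (screen height / 2) / tan(vFOV / 2), so preserving the visual angle across zoom levels would mean physically moving back from the monitor as you zoom in.

```python
import math

def matching_view_distance(screen_height_cm, vfov_deg):
    """Viewing distance at which a flat screen of the given height subtends
    the same vertical angle as the in-game vertical FOV."""
    return (screen_height_cm / 2) / math.tan(math.radians(vfov_deg) / 2)

# Example: a 34 cm tall monitor (roughly a 27" 16:9), hipfire vs two zoom levels.
for vfov in (74, 40, 20):
    print(f"vFOV {vfov:>3} deg -> sit at {matching_view_distance(34, vfov):6.1f} cm")
```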

Link to comment
On 7/21/2018 at 10:19 PM, CaptaPraelium said:

This will be the indefinite integral of f(x) = atan(tan(fova/2)x)/atan(tan(fovb/2)x) with limits 0 and 1. The inverse of that function will provide us with the MOST correct monitor match percentage for the given FOVs. 

Sorry, I'm very confused. You say indefinite integral but then say it has limits - is that just a typo? The inverse of which function? Do you need to find the inverse of f(x) and then integrate with those limits or are you trying to find the indefinite integral and then find the inverse of that function? Or are you just trying to find the definite integral, and then do 1/result? 

On 7/24/2018 at 7:43 PM, CaptaPraelium said:

f(x) = (tan^(-1)(tan(c) x))/(tan^(-1)(tan(C) x))

d/(dx)(f(x) = (tan^(-1)(tan(c) x))/(tan^(-1)(tan(C) x))) = ((tan(c) tan^(-1)(x tan(C)))/(x^2 tan^2(c) + 1) - (tan(C) tan^(-1)(x tan(c)))/(x^2 tan^2(C) + 1))/tan^(-1)(x tan(C))^2


Just my luck, it won't integrate it for me (exceeded computation time) so I have to do it manually

 

This seems to contradict what you were saying in the first quote.

Are you just trying to integrate  ((tan(c) tan^(-1)(x tan(C)))/(x^2 tan^2(c) + 1) - (tan(C) tan^(-1)(x tan(c)))/(x^2 tan^2(C) + 1))/tan^(-1)(x tan(C))^2 ? 

Is that result from the mean value theorem? It doesn't make sense to me, since you still have x in the result. It also doesn't make sense if it's not from the MVT and is from taking the derivative directly - for one, it doesn't look like the correct derivative, and secondly, in that case integrating wouldn't accomplish anything - it would just return the original function, and you seem to want to integrate the result.

On 7/24/2018 at 8:44 PM, CaptaPraelium said:

I just don't have a formula to get our value for c


Wouldn't you just set the MVT result equal to the derivative of f(x), and solve for x? In which case, what and why are you integrating? Is the final goal to get f(c) once you get c? As in, just plug c in for x in h(x) from the geogebra page?

I would like to try to help with the calculations, and I've put most of your equations into Mathcad, but I'm totally lost as to what you're trying to do.
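For what it's worth, here is a minimal numeric sketch of one reading of the calculation being discussed (compute the mean value of f over [0, 1], then find the c where f(c) equals that mean, per the mean value theorem for integrals). This is only an interpretation of the exchange, not necessarily the intended procedure, and the example FOVs are arbitrary:

```python
import math

def f(x, fov_a, fov_b):
    """f(x) = atan(x * tan(fov_a / 2)) / atan(x * tan(fov_b / 2)), FOVs in radians."""
    if x == 0.0:
        return math.tan(fov_a / 2) / math.tan(fov_b / 2)  # limit as x -> 0 (zoom ratio)
    return math.atan(x * math.tan(fov_a / 2)) / math.atan(x * math.tan(fov_b / 2))

def mean_of_f(fov_a, fov_b, n=10000):
    """Trapezoidal estimate of the integral of f over [0, 1], i.e. its mean value."""
    h = 1.0 / n
    total = 0.5 * (f(0.0, fov_a, fov_b) + f(1.0, fov_a, fov_b))
    total += sum(f(i * h, fov_a, fov_b) for i in range(1, n))
    return total * h

def mvt_point(fov_a, fov_b):
    """Find c in [0, 1] with f(c) equal to the mean value of f. f is monotonic
    in x here, so simple bisection is enough."""
    target = mean_of_f(fov_a, fov_b)
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (f(mid, fov_a, fov_b) - target) * (f(lo, fov_a, fov_b) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Arbitrary example FOVs: 40 deg scope vs 103 deg hipfire (horizontal, in radians).
print(round(mvt_point(math.radians(40), math.radians(103)), 4))
```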

Link to comment
10 hours ago, Drimzi said:

I'm going to copy paste an explanation someone gave in a Discord on why a screen distance match is flawed, and why zoom ratio is the only correct way to convert sensitivity. He absolutely despises this website, so will probably never come here to explain for himself.
 

 

You lost me on this post; although I have an engineering background, visualizing 3D space is not my forte. I would appreciate it if you could give a more easy-to-visualize explanation, specifically the relation between mouse movement and the parabola/hyperbola concept.

Link to comment
51 minutes ago, sidtai817 said:

the relation between mouse movement and the parabola/hyperbola concept.

Where red represents a hyperbola,
yellow - a parabola,
green - an ellipse,
white - a circle.

As you move downwards/upwards in the game world, the lines curve, creating parabolas/ellipses or even circles.

Hope I'm not mistaken  

projections_fed_rect_150.jpg

Edited by MuntyYy
Link to comment
13 hours ago, Drimzi said:

I'm going to copy paste an explanation someone gave in a Discord on why a screen distance match is flawed, and why zoom ratio is the only correct way to convert sensitivity. He absolutely despises this website, so will probably never come here to explain for himself.
 

 

So, what was his solution? Is it actually implemented in any games? How could he hate this site? It's the only one on the internet that discusses this topic in depth. If he cares so much about it, why is he not contributing? Is 0% not essentially the same thing as "zoom ratio"? If not, what is?

This seemingly implies that monitor match differs between different games. I have spent a lot of time relearning my sensitivity over the years, and then you have the best players in the world who don't give a crap about any of this and just play the games. I just have a bit of a hard time believing zoom ratio instead of 0% match will actually reap dividends in my actual in-game performance, and that is what we are really trying to accomplish here, correct?

Edited by Bryjoe
Link to comment
5 hours ago, Bryjoe said:

So, what was his solution? Is it actually implemented in any games? How could he hate this site? It's the only one on the internet that discusses this topic in depth. If he cares so much about it, why is he not contributing? Is 0% not essentially the same thing as "zoom ratio"? If not, what is?

This seemingly implies that monitor match differs between different games. I have spent a lot of time relearning my sensitivity over the years, and then you have the best players in the world who don't give a crap about any of this and just play the games. I just have a bit of a hard time believing zoom ratio instead of 0% match will actually reap dividends in my actual in-game performance, and that is what we are really trying to accomplish here, correct?

0% (zoom ratio) is the solution.

Link to comment
6 hours ago, MuntyYy said:

Where red represents a hyperbola,
yellow - a parabola,
green - an ellipse,
white - a circle.

As you move downwards/upwards in the game world, the lines curve, creating parabolas/ellipses or even circles.

Hope I'm not mistaken  

projections_fed_rect_150.jpg

 

Ohhh okay, I think I get it now. So the point is that we have chosen to project vertical lines as straight rather than horizontal lines (we have to pick one when projecting 3D space onto a 2D plane). If we are using monitor match, it applies to x-coordinates irrespective of y-values. However, MM does not apply along the y-axis unless the x-coordinate is 0. Therefore, choosing MM values other than 0% is going to lead to inconsistencies in the y-axis when snapping to targets not aligned with the line x = 0.

 

I hope I am understanding this correctly.

Link to comment
10 hours ago, sidtai817 said:

If we are using monitor match, it applies to x-coordinates irrespective of y-values. However, MM does not apply along the y-axis unless the x-coordinate is 0. Therefore, choosing MM values other than 0% is going to lead to inconsistencies in the y-axis when snapping to targets not aligned with the line x = 0.

This ain't true. MM applies on both axes (same as all the other conversions). Yaw applies only horizontally, whilst pitch applies only vertically. 0%, however, is independent of aspect ratio, screen dimensions, resolution and so on.

0% - zoom ratio
100% - gear ratio

Edited by MuntyYy
Link to comment

What he said was true. The monitor match rule is not going to work as expected for yaw unless it is purely horizontal, which is dependent on the pitch. Pitch is independent, the rule always applies here. If you matched 10cm to 100%, then looked straight up and then tried to do a 10cm movement to reach a target at the horizontal edge of the screen, it's not going to work (although I haven't checked if 10cm with a specific trajectory would work).

As for this 'gear ratio', anything can be a gear ratio. 100% is the 'gear ratio' for the horizontal angle. 56.25% is the 'gear ratio' for the vertical angle. 0% is the 'gear ratio' for the focal length.
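A quick numeric check of those three correspondences, on a 16:9 screen with hypothetical horizontal FOVs of 103° and 40°: the 100% multiplier equals the horizontal-angle ratio, 56.25% equals the vertical-angle ratio, and 0% equals the focal-length (tan) ratio.

```python
import math

def mdh_scale(hfov_old_deg, hfov_new_deg, frac):
    """Monitor-distance multiplier at `frac` of the horizontal half-screen."""
    t_old = math.tan(math.radians(hfov_old_deg) / 2)
    t_new = math.tan(math.radians(hfov_new_deg) / 2)
    if frac == 0:
        return t_new / t_old  # 0% limit
    return math.atan(frac * t_new) / math.atan(frac * t_old)

def vfov(hfov_deg, aspect=16 / 9):
    """Vertical FOV corresponding to a horizontal FOV on the given aspect ratio."""
    return 2 * math.degrees(math.atan(math.tan(math.radians(hfov_deg) / 2) / aspect))

old, new = 103.0, 40.0  # hypothetical horizontal FOVs on a 16:9 screen
print("100%   :", round(mdh_scale(old, new, 1.0), 4),
      "= hFOV ratio", round(new / old, 4))
print("56.25% :", round(mdh_scale(old, new, 9 / 16), 4),
      "= vFOV ratio", round(vfov(new) / vfov(old), 4))
print("0%     :", round(mdh_scale(old, new, 0.0), 4),
      "= tan ratio ", round(math.tan(math.radians(new) / 2) / math.tan(math.radians(old) / 2), 4))
```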

Link to comment
