Everything posted by CaptaPraelium

  1. ...and as usual I find myself asking, "Why?"... I mean, we know that in game, that robot seems "further" away, but if this were real life and I switched from irons to a zoomed optic on a rifle, I wouldn't feel like it was further at all. It'd just feel like ~12 degrees no matter what. And, we know that the target is ~12 degrees from the crosshair in both images, but it sure LOOKS further in the second image. Muh damn feels are totally lying to me. Plus, as you've mentioned, when left to adjust the sens according to their feelz, the community has gravitated toward a certain number. Like you say, it suggests something. But what? Why 38? What are the VFOVs on those two images, skwuruhl?
  2. Skwuruhl is the most knowledgeable in the field of mathematics and firmly believes that the mathematically correct ratio is the correct method to scale sensitivity. You are the most experienced in the field of pewpew and have constructed outstanding formula based on your experience. I am a programmer and am following a programmatic method to find a formula, by working my way backward from what we know and experience, to discover what we want to know and experience. You're quite right that, with experience, and 'getting used to it' (ie, practice), ANY sensitivity will do just fine. Heck, they didn't have any of these configuration options in the 80's and I did just fine blasting pixels at 640x480x16 colours I bet we've all used a cruddy joystick to land on tiny platforms after jumping over some dangerous object with perfect timing. Our brains will take care of it eventually, as we gitgud. It's also true that the optimal process will always be optimal; if we're going to get used to a scaling method, it makes sense to use that which is correct, and as it stands, that's zoom ratio. That being said, one can't help but wonder whether the reason why zoom ratio feels wrong is just bias (ie, we 'got used to' something else) or if there's more to it. Bias is incredibly powerful and I will not be at all surprised if that turns out to be all there is to it. Still, I am not satisfied with making any assumptions, be they that bias is the critical factor here or that it is not. I have a strong inclination towards skwuruhl's approach of 'math don't lie', but after spending 6 months in an attempt to "suck it up" and "get used to" using 0%MM aka zoom ratio, in hopes of overcoming my bias, I still felt that there was something else at play.... and I'm just that type of guy who has to ask, "Why?". I'm certainly not an expert in human visual perception, but I've spent a lot of time over the past few months learning as much as I could... 
and what I've learned has told me that there is no doubt that we all perceive the image differently, as a result of the distortion introduced by the projection. In my most recent posts, I have attempted to account for that distortion. As we know, zoom ratio is the mathematically correct method to scale between zoom levels, but these are not solid rectangles we're scaling; they are filled with an image. That image is what gives us the perception that our sensitivity should scale at all. As Valve put it, "an optically correct perspective can be obtained by matching the Camera's angle of view to the angle between the Player's eye and the edges of the image on his screen. Obviously this angle varies according to the actual size of the Player's screen and how far away from the screen he is actually sitting". It is a common analogy to consider that the screen is a 'window' through which we view the game world, and where our angle of view of the screen matches the FOV in game, that analogy holds true... but once those angles differ, the analogy weakens. A more appropriate analogy would be that the 'window' formed by our screen is in fact a 'lens'. If we consider it as such, zoom ratio is the correct scaling method for us to use... but there's more to consider. If we were looking at the real world through such a 'lens', like a telescope with a big rectangular eyepiece, using zoom ratio would work just fine, because we would just be zooming an optically correct image into another optically correct image. However, in game, we are not doing this. We are zooming a distorted image. Consider the design of a zoom lens: The three lenses on the left are performing the actual zooming by adjusting the focal length, and the lens on the right focuses that image onto the eye (or film or whatever). For our purposes, zoom ratio describes the result of the three zoom system lenses; however, we do not have an analogous formula for the focussing lens on the right.
At present, that is fixed, controlled by our distance from the monitor. To use another image to explain it.... First, the description of the image: "A varifocal lens. Left image is at 2.8 mm, in focus. Middle image is at 12 mm with the focus left alone from 2.8 mm. Right image is at 12 mm refocused. The close knob is focal length and the far knob is focus. " Zoom ratio, describes the difference between the left and centre images. We do not have a formula to describe the difference between the centre and the right image. ....I'm working on it
  3. Yeh, that's what I was thinking as I typed it. Obviously that won't have the desired effect. I've got an appointment to do, so I'll give it some more thought in a few hours.
  4. Yes, we can see that they are the same for the purpose of measuring the zoom, by your Overwatch image I quoted right at the top of this thread, but the point of this thread is that the distortion of the image has a perceptual effect on our brain (for an unrelated example, high FOV makes you seem like you are running really fast) and accordingly there is more to the way we perceive the shift in FOV than just the zoom amount. This is why it is important to understand the distortion fully, so that we can analyse the effects that such distortion would have on our perception. The reason I asked if the axes are treated the same, is that all of the formulas I had found for rectilinear projection imply stretching in the X axis which is greater than the stretching in the Y axis. As you say, this has no bearing on zoom amount, because as I said, the stretching is constant anyway (as you've mentioned, the same formula for zoom ratio will apply, even for the diagonal FOV)... But, you have said that the stretching is in fact equal on both axes, and you are the first I have heard suggest this. We have both said in this thread, that the effect of having more distortion visible is related to the aspect ratio of the monitor, but there's more to it than that, as visible in this image where the aspect is vertical: You are taking that a step further and saying that there is no additional stretching in the X axis, when compared to the Y axis, regardless of what is visible. I asked if we have any evidence of this (because like you, I am attempting to avoid any pseudoscience here) and the funny thing is, I think I may have already posted it. As I've discussed above, the majority of literature available regarding rectilinear projection pertains either to cartography or photography... and it seems you've brought me to realise why this is - because the projection method we are using on computers is not generally referred to as 'rectilinear projection'.
Yes, it is "rectilinear" projection, in the sense that straight lines in the 3D world are represented as straight lines in the 2D projection, but in the computer graphics field, the term I'm seeing more widely applied is 'perspective projection'. This naming is to distinguish what we see in-game, where more distant objects appear smaller (as in real life), thus creating a sense of depth to the image; from 'parallel projection', which is used in computers for such tasks as CAD modelling, where the length of the lines must also remain intact, and this effect is achieved by means of mapping the 3D coordinates via parallel lines onto the 2D projection. As in, ALL of these are rectilinear projection: Only the first one applies to us. The distinguishing features here are not only that straight lines are projected as straight (ie, it is rectilinear), but that parallel lines are not projected as parallel; rather, they converge toward a vanishing point (ie, perspective). So, now that I have the correct terminology, it's been much easier to find information regarding the nature of the projection... Well, I say 'find', but I should say 'found', because I already came across it. Some of it is even pasted above, and some I left out as it seemed to be irrelevant at the time.
Now I know better, let's sort that out with some link spam: http://www.significant-bits.com/a-laymans-guide-to-projection-in-videogames/ https://en.wikipedia.org/wiki/3D_projection#Perspective_projection https://en.wikipedia.org/wiki/Camera_matrix https://en.wikipedia.org/wiki/Perspective_(graphical)#Three-point_perspective https://en.wikipedia.org/wiki/Graphical_projection#Perspective_projection https://www.math.utah.edu/~treiberg/Perspect/Perspect.htm https://www.cse.unr.edu/~bebis/CS791E/Notes/PerspectiveProjection.pdf https://en.wikipedia.org/wiki/3D_computer_graphics https://en.wikipedia.org/wiki/3D_rendering https://www.youtube.com/results?search_query=perspective+projection+in+computer+graphics just in case anyone prefers to learn visually, there are plenty of videos about this. TL;DR, the X and Y axes are treated the same way. All of this is actually good news for me, as not only does it answer the question as to why the image behaves as it does when the camera is rotated around the Z axis (specifically, is the X axis fixed to the camera or the world horizon - answer: it doesn't matter), but it also negates any concern of unequal stretching between the X and Y axes. The illusory effect of forced perspective still comes into play here, but we are now down to a simple factor to consider - perceived visual angle, which is determined not only by the FOV of our image, but as Skwuruhl has touched on, focal length... Or, in the case of a computer monitor, we're talking about size of, and distance from, the screen. Valve touch upon this here: https://developer.valvesoftware.com/wiki/Field_of_View#Optimization and I have briefly touched on the effect of angle of view, perceived visual angle, etc, in the posts above. Essentially, what all of this implies is that in addition to considering the ratios between two zoom levels, aka fields of view on screen, we should consider that those zoom levels are in fact ratios of another field of view - that of the screen itself.
There is an easy way to demonstrate this, so I'll cough up drimzi's image again: I don't know what FOV this image is (Lil help @Drimzi?) but suffice to say, it's big. The wall at the end looks tiny, and the signs on the left wall look huge. Right? Well... Open this image fullscreen, and while you stay looking at the wall at the back, get your face right up against the screen. I mean, if your nose touches the screen, you're doing it right. (OK, maybe not quite that far, but as close as possible.) Notice the green sign doesn't look so distorted any more? That's because of the angle at which we're viewing it. I imagine the formula which takes this into account would essentially be a 'screen to zoom ratio, to screen to zoom ratio, ratio'. And if that name didn't make you giggle you're taking this all far too seriously. Basically what I have in mind is:
- Take screen dimensions (pixels) and size (inches/cm diagonal)
- Take distance from screen to eyes
- Do basic trig to find the FOV of the screen
- Use the existing zoom ratio formula ScreenZoomRatioA = tan(FOVa/2)/tan(ScreenFOV/2) to find the ratio between the screen and the first FOV (say, hipfire)
- Use the same formula ScreenZoomRatioB = tan(FOVb/2)/tan(ScreenFOV/2) to find the ratio between the screen and the second FOV (say, 4x zoom)
- Then simply ScreenZoomRatioA/ScreenZoomRatioB = PerceivedZoomRatio
I'm throwing these ideas up as I type, so there are likely issues and I'm sure that can be simplified... But it's getting late so I gotta wrap it up for now. I seem to remember @DNAMTE or @potato psoas were already working on a formula which took distance from screen into account. I don't want to reinvent the wheel or steal anyone's thunder here. If you guys have something going on this I'd love to hear about it. Likewise @Skwuruhl, I know you speak this stuff like a second language, so if you can whip something up that would probably save me a lot of time.
I have a feeling that an 'all day job' for me is a '5 minute break' thing for you hehe Otherwise I might go ahead and cobble something together sometime in the next couple of days.
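The steps above can be sketched in Python. This is only a rough sketch of the idea as posted - the function names and example numbers are mine, not anything official - and it's worth noting that, as written, the ScreenFOV term cancels in the final quotient, so the result collapses back to the plain zoom ratio tan(FOVa/2)/tan(FOVb/2); that may itself be a hint that the screen needs to enter the formula in some other way.

```python
import math

def screen_hfov(diagonal_cm, aspect_w, aspect_h, distance_cm):
    """Horizontal angle subtended by the screen at the eye, in degrees.
    Basic trig on the screen's physical width and the viewing distance."""
    width = diagonal_cm * aspect_w / math.hypot(aspect_w, aspect_h)
    return 2 * math.degrees(math.atan(width / (2 * distance_cm)))

def screen_zoom_ratio(game_fov_deg, screen_fov_deg):
    """tan(FOV/2) / tan(ScreenFOV/2), per the formula in the post."""
    return (math.tan(math.radians(game_fov_deg) / 2)
            / math.tan(math.radians(screen_fov_deg) / 2))

def perceived_zoom_ratio(fov_a_deg, fov_b_deg, screen_fov_deg):
    """ScreenZoomRatioA / ScreenZoomRatioB. Note the ScreenFOV term
    cancels, so this reduces to tan(FOVa/2)/tan(FOVb/2)."""
    return (screen_zoom_ratio(fov_a_deg, screen_fov_deg)
            / screen_zoom_ratio(fov_b_deg, screen_fov_deg))

# e.g. a 60 cm (diagonal) 16:9 screen viewed from 70 cm
# subtends roughly 41 degrees horizontally
print(round(screen_hfov(60, 16, 9, 70), 1))
```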
  5. But the ratio (divided by cos(yaw)) is constant. Every formula I've ever come across shows that the X axis is treated differently to the Y axis. Do we have any evidence that X and Y axes are treated the same? BTW, typos are very likely. I'm doing the formula in LibreOffice and constantly converting radians to degrees and back... It's annoying AF.
  6. Reserved (for continuation of the above... Gonna wait and see what others have to say first!)
  7. There is much discussion amongst gamers as to what is the "best FOV". Lower FOV makes targets bigger, makes the centre of the screen more easily visible, and shows more detail. It's great. But higher FOV creates peripheral vision, reduces apparent recoil, and gives the impression of faster movement. It's great... There are many pros and cons. Accordingly, the "best FOV" is generally considered to be subjective... In other words, it depends on the player. I do not subscribe to this logic, and I am making this thread to discuss the objectively optimal FOV. For every increase or decrease in FOV, there is a tradeoff between the pros and cons, of which I gave some examples. This tradeoff is not magic, it's math. We can crunch the numbers to find out where the distortion of the image becomes excessively lossy, when compared to the additional information on our monitor. This is not a thread where "muh feels" have any place. If you have an FOV which you feel is best, and it works for you, I am not trying to change your mind. Use it! But, if you want to pitch in and crunch numbers to find the sweet spot where more image meets less quality, you're in the right place. I've started some discussion over here: Which aims to define the nature of the distortion in the image, not only along the axes where x=0 and y=0, but across the whole image. The formulas which arise from that thread will be directly applied here. I'll be focussing on that thread first, but I feel like the content in that thread leads directly to this one, so, here it is! Annnnnd go!
  8. OK badguys successfully pewpew'd, time to get my nerd on again, I guess. As usual, let's start out with an image as food for thought. This is an image of the US Federal Reserve building, taken from the http://www.tawbaware.com/projections.htm#proj_rect website. It is 150 degrees horizontal FOV, and displayed in a cylindrical projection. As a brief aside, this image shows us why rectilinear projections are good in game, and why we want the distortion in our image. Consider two targets on the roof, one in the middle of the roof, and one at the far left side. Now consider two targets on the bench where the red line is, again at the centre and on the far left side. The movement from the centre target to the side target, in both cases, is the same. But for the guys on the roof, it would appear that you'd have to use a different amount of vertical mouse movement, since the roof is curved. Keeping straight lines straight is important to us when we're moving from point A to point B in a straight line... and keeping straight lines straight is what rectilinear projection does. So, let's see the same image, in rectilinear projection: Note the excellent visualisation of the perception effects discussed above. Note the apparent change in distance between the trees and the building, between the camera and the building, and between the camera and the trees, etc.... Same 150 degrees HFOV. Same width. And you can see that the count of divisions of the white lines (looks like they're every 10 degrees) is the same. But there is some distortion here. They are not only stretched out at the edges, but compressed at the centre. Roughly the same 80 degrees VFOV (based on the white lines). But we can instantly see that the height is much, much less... and here's the important part - notice that the VFOV is the same... ON THE BLUE LINE. Now look at the edges of the image. What's the VFOV there? Maybe 30 degrees (going by those white lines... we can math that out in a bit). So... what?
Let's consider the formula provided by tawbaware, for rectilinear projection: (and I'm not doing Greek letters any more, you nerds) x=tan(yaw) y=tan(pitch)/cos(yaw) Let us keep in mind that for our considerations, we are measuring angles from the centre of the image, not from the bottom left corner. So, our formula for calculating VFOV is: VFOV = 2*arctan(tan(HFOV/2)*Height/Width) We use the 2* and the /2 to account for measuring from the centre (halfway into the image), and we take our screen height and width measurements to keep everything in the correct aspect ratio. Cool. So, to oversimplify by ignoring these considerations, we have y=tan(pitch). So, does that match the projection formula? Well, it does, IF we are only measuring at the centre of the screen, where the blue line is, where yaw=0: y=tan(pitch)/cos(yaw) = tan(pitch)/cos(0) = tan(pitch)/1 = tan(pitch) But what about when we measure at the EDGE of the screen? There, yaw is at its maximum, HFOV/2, and the cos(yaw) division kicks in: the same on-screen half-height that spans tan(VFOV/2) of vertical angle at the centre spans a smaller vertical angle at the edge. Rearranging y=tan(pitch)/cos(yaw) for the pitch at the edge gives: EdgeVFOV = 2*arctan(tan(VFOV/2)*cos(HFOV/2)) By the way, since I said we would math it out: that image is 804x178 and 150 HFOV. Using the formulas here, that works out to a centre VFOV of about 79.1 degrees, and an EdgeVFOV of about 24.1 degrees - in the same ballpark as the guesses from the white lines. @Skwuruhl Please feel free to pitch in ... Please be gentle, this is not exactly my field of expertise. I could use your help! Regardless of any screwups I've made in the math (which are all too likely, it's been a long time since I learned or used any of this kind of math), you can see my point here.
0%MM takes into consideration the distortion along the axes of our image, which are those used to define the limits of the FOV on those axes, but it does not take into account the distortion as we move away from the axes, specifically as we move away from the y axis, and as discussed above, that distortion is the reason why we perceive the image differently. I'm going to take a break here, and allow people to double check my math, before I move forward in my concept with more math based upon this post. In my last post, I ended by saying: And I'm obviously not done answering those questions. This post has mostly been about answering the last of those questions, "How do we measure the distortion?". What we have observed here, is that the horizontal distortion is constant at tan(yaw) regardless of vertical angles, but mostly, this post has been about measuring the distortion in the vertical plane, and how yaw affects that distortion. Next up, we can look into how the now-defined distortion is affected by aspect ratio. This will tell us what we should be using to calculate sensitivity, ie the first question "To what extent do we measure that FOV? The monitor? a square? a circle? or perhaps a diagonal?"... There's much discussion about that over in the viewspeed reworked thread, so I hope that this will tell us outright which is the best way.
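To make the arithmetic above easy to double-check, here's a small Python sketch. The function names are mine, and the edge figure is derived directly from the cos(yaw) term of the projection (my reading of the math - please do check it): the same on-screen height spans a smaller vertical angle at the horizontal edge than on the centre line.

```python
import math

def vfov(hfov_deg, width, height):
    """Centre-line vertical FOV: VFOV = 2*arctan(tan(HFOV/2) * height/width)."""
    return 2 * math.degrees(math.atan(
        math.tan(math.radians(hfov_deg) / 2) * height / width))

def edge_vfov(hfov_deg, width, height):
    """Vertical FOV spanned by the screen at its horizontal edge, where
    y = tan(pitch)/cos(yaw) compresses vertical angles:
    EdgeVFOV = 2*arctan(tan(VFOV/2) * cos(HFOV/2))."""
    half_v = math.radians(vfov(hfov_deg, width, height)) / 2
    return 2 * math.degrees(math.atan(
        math.tan(half_v) * math.cos(math.radians(hfov_deg) / 2)))

# the 804x178 tawbaware image at 150 degrees HFOV:
# centre VFOV roughly 79.1 degrees, edge VFOV roughly 24.1 degrees
print(round(vfov(150, 804, 178), 1), round(edge_vfov(150, 804, 178), 1))
```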
  9. Horizontal FOV is of questionable use in formulas, because it means that for the same movement on the same image, in the centre of a 16:9 screen and a 21:9 screen (or a 2:2000 screen for that matter), you wind up with a different sensitivity (e.g. 1cm of movement from centre at 80 VFOV should be the same mouse movement regardless of aspect ratio). People tend to think in terms of horizontal FOV because most of our aim movement is horizontal, because maps reflect the world we live in, where vertical movement is less prevalent and vertical FOV is lower (as in, between our cheeks and brows). However, this is contrary to a good formula.
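A quick way to see the aspect ratio dependence (a sketch; the helper name is mine): hold the VFOV fixed and derive the HFOV each monitor shape implies. The HFOVs differ per monitor even though nothing about the image's vertical geometry - and hence the sensible mouse movement - has changed.

```python
import math

def hfov_from_vfov(vfov_deg, aspect_w, aspect_h):
    """Horizontal FOV implied by a fixed vertical FOV on a given
    aspect ratio: HFOV = 2*arctan(tan(VFOV/2) * width/height)."""
    return 2 * math.degrees(math.atan(
        math.tan(math.radians(vfov_deg) / 2) * aspect_w / aspect_h))

# the same 80-degree VFOV implies a different HFOV on each monitor shape
for w, h in [(16, 9), (21, 9), (32, 9)]:
    print(f"{w}:{h} -> HFOV {hfov_from_vfov(80, w, h):.1f}")
```

(The familiar 103 HFOV / 70.53 VFOV pairing at 16:9 drops out of the same relation.)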
  10. Crickets... Oh well, I'll go deeper. That should make things much worse. XD First, more food for thought. Do the lines appear to change in length? Some will say it looks like they are, some will say they don't. Last post, I wrapped up by talking about the amount of distortion at our two FOVs... In other threads, up until this day, there has been debate about which FOV should be the basis for the formula - vertical or horizontal. Over in the https://www.mouse-sensitivity.com/forum/topic/720-viewspeed-reworked/ thread, it was pointed out that the projection has the effect of using a set VFOV, and then 'filling out' the horizontal plane to fill our monitor. This could imply that we should be using the VFOV, since HFOV varies with monitor aspect ratio. This makes sense, seeing as movement from one point to another that is within the VFOV limits, even in the horizontal plane of the world, should require the same amount of movement of the mouse, regardless of how much peripheral vision extends to the sides of the monitor - in other words, given the same VFOV setting, aiming 1cm to the left should feel the same no matter if we have a 16:9 or a 21:9 or a 32:9 monitor... But that assumes a positive aspect ratio (monitor wider than tall). What if we consider the centre monitor in one of those cool triple-vertical-monitor setups? Better yet, what if we are in an aircraft/spacecraft, and roll it to the side? Our monitor's vertical FOV is now in the horizontal plane of the game world, and vice versa, and the amount of distortion in those planes does not vary. Only the visible amount of distortion in the projection to our monitor varies. So, we can just use the vertical FOV again, since it is always the angle which provides the constant amount of distortion no matter the aspect of our screen. However, if we ignore the distortion in the horizontal aspect of the monitor, we are ignoring the very effects upon our perception which render 0%MM ineffective.
Consider the aforementioned effect of the Odessa Steps. This same effect would be just as apparent if we turn our head to the sides. Then again, if we use the horizontal distortion as a basis, we are now including distortion which is not always visible, and in the most common case of a positive aspect ratio monitor positioned horizontally (read: not a vertical screen or a rolled-sideways spacecraft), this distortion is usually not visible. Indeed, it turns out that this is different for different people. Current studies suggest that the difference is caused by... wait for it... eye pigmentation. Yep, the colour of your eyes makes a difference here. Does this mean we need some kind of coefficient to allow for blue vs brown vs green eyes? Well, fortunately not. See, the different eye pigmentation does appear to affect our perception of the distortion (see https://en.wikipedia.org/wiki/Müller-Lyer_illusion for more info), and other effects have been cited as the reason for the illusion, but all agree that the perception is uniform in manner. This effect does not vary between vertical and horizontal or any diagonal in between. This can be seen in the cool animated image at the top of this post. Whether you see growing/shrinking lines, you'll see the same on all of them. Why is this important? Because it tells us that we do not need to account for the differences in individual perception of the distortion. We need only account for the differences between the distortion in each axis. So, once again, analysis of the sciences tells us that the path to an answer lies in a ratio between the distortion in each FOV. But to what extent do we measure that FOV? The monitor? A square? The vertical square or the horizontal? Maybe a circle?... And how do we measure the distortion, since it's not the same all the way to the edges but increases as we diverge from the centre? But that's a topic for another post on another day. I have badguys to pewpew
  11. On rectilinear projection, distortion, and forced perspective: So, we know already that a perfect mathematical scaling between FOVs is '0% Monitor Matching': ZoomLevel = tan(HipFOV/2)/tan(ZoomFOV/2) HipFOV = 2*(arctan(ZoomLevel*(tan(ZoomFOV/2)))) ZoomFOV = 2*(arctan(tan(HipFOV/2)/ZoomLevel)) So, let's think about what is actually contained within those images, which influences our perception of the game world, and causes our mouse to require a different scaling algorithm. We can start by looking at the formula for the projection: x=tan(λ) y=tan(Φ)/cos(λ) There are two obvious facets to this. One is the distortion by the tangent of the angle. The other is the reduced distortion on the vertical axis, caused by the division by the cosine of the horizontal angle. It is often said that rectilinear projection results in horizontal stretching, but we can see that in fact both axes are stretched, just the vertical axis less so. Whatever - the result is that the horizontal axis is more stretched. But this isn't a stretch that happens evenly across the entire monitor. The stretching distortion will always be at its minimum at the centre of our screen, and increase from there outward. This is why attempts to treat the game as we would a 2D space can never work, because a 2D space like our desktop is evenly stretched from edge to edge. This is why we have n% monitor matching, and why Battlefield uses a coefficient, and why CSGO also uses a (fixed at 133% aka 4:3 aspect afaik) coefficient - these are attempts to make such an approach function - and they will succeed, but ONLY at that exact percentage/coefficient value. So, we have two considerations here which will affect our perception - increased horizontal stretching compared to the vertical, and increased stretching on both axes as we diverge from the centre.
I will touch on the latter effect only briefly for now - and I'm not shy to say that is because I really don't know how our brains perceive this. The only information I have come across thus far was this comment by @TheNoobPolice over on the Battlefield forums: https://forums.battlefield.com/en-us/discussion/comment/532023/#Comment_532023 I'm confident that further study of the effect of distortion on our perception will lead me to more solid conclusions, but any further input on this effect would be greatly appreciated. However, much is known about the former effect, the increased stretching in the horizontal plane. This results in an effect known as "forced perspective". You can read all about it from the links in the 'library' post above this, but the basics of it are that objects appearing wider than they are long causes our brains to perceive them as being further away. I think it's worth having a read of the Wikipedia page on this effect: https://en.wikipedia.org/wiki/Forced_perspective Let's take a few images from that page to demonstrate how this would affect us in-game. Let's imagine that we are standing on the right side of this room, looking along the pews, toward the left side. Now, we want to move our aimpoint to the back of the dais there and look at the painting of Jesus. We will see that *angular distance* (remember this term!), let's say the dais is, what, a few metres deep? So we turn to our right accordingly, and we are now looking at our saviour... Or are we? Hold on... that's not a few metres deep. It's an illusion; it's maybe ONE metre deep, if that! But it looked MUCH deeper than that, and we turned accordingly. The illusion has caused us to turn too far. This sensitivity formula is too fast! "MUH FEELS!" XD Let's do it again: What's the distance from the big golden cross, to the red-covered lectern on the left there? How about now? MUH FEEEELZ!
Now, these examples may not be entirely analogous to the view we have of our game world on our screens, but I'm sure that it is apparent to you by now, that the angles we see affect our perception of distance, and conversely, the distances we see affect our perception of the angles. This is probably a good moment to point you toward https://en.wikipedia.org/wiki/Angular_diameter and https://en.wikipedia.org/wiki/Perceived_visual_angle Fortunately for us, we are not at the whim of a clever church architect playing tricks with our mind. We know exactly how far the distances on screen have changed - it is the ratio between the distortion at the two FOVs. Accordingly, we can write formulas which allow for this effect. Please, pitch in guys. Share your thoughts, share your formulas
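The 0%MM relationships and the projection formula quoted above are easy to sanity-check in Python (a sketch; the function names are mine):

```python
import math

def zoom_level(hip_fov_deg, zoom_fov_deg):
    """ZoomLevel = tan(HipFOV/2) / tan(ZoomFOV/2)  (0% monitor matching)."""
    return (math.tan(math.radians(hip_fov_deg) / 2)
            / math.tan(math.radians(zoom_fov_deg) / 2))

def zoom_fov(hip_fov_deg, zoom_level):
    """The inverse: ZoomFOV = 2*arctan(tan(HipFOV/2) / ZoomLevel)."""
    return 2 * math.degrees(math.atan(
        math.tan(math.radians(hip_fov_deg) / 2) / zoom_level))

def project(yaw_deg, pitch_deg):
    """Rectilinear projection of a world angle onto the image plane:
    x = tan(yaw), y = tan(pitch)/cos(yaw)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return math.tan(yaw), math.tan(pitch) / math.cos(yaw)

# the stretching grows away from centre: 60 degrees of yaw lands more
# than twice as far from centre as 30 degrees does
print(project(30, 0)[0], project(60, 0)[0])
```

Round-tripping a FOV through zoom_level and zoom_fov returns the original value, which is a handy check that the two formulas really are inverses.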
  12. Library http://www.tawbaware.com/projections.htm http://www.tawbaware.com/ptassembler_mod_rect_projections.pdf http://mathworld.wolfram.com/GnomonicProjection.html - So why is this formula different to the above? Because we're not doing a map projection of the globe. Turns out that the same terms are used both in cartography (maps) and photography - the latter of which is actually relevant to us. https://en.wikipedia.org/wiki/Rectilinear_lens - Note the image here, which shows us why we don't want to avoid the image distortion present in rectilinear projection. That fish-eye effect (known as 'barrel distortion') means that straight lines are no longer straight. This would be very bad in a shooter, for example. Speaking of photography and its relevance to us, and distortion... https://en.wikipedia.org/wiki/Distortion_(optics) - but why? Let's look at how the machine creates the image... This can get pretty nerdy and isn't mandatory for understanding this subject, so feel free to skip the computer stuff. It's the brain stuff that's important here. https://msdn.microsoft.com/en-us/library/windows/desktop/bb147302(v=vs.85).aspx?f=255&MSPPError=-2147217396 https://en.wikipedia.org/wiki/3D_rendering https://en.wikipedia.org/wiki/3D_projection https://en.wikipedia.org/wiki/Field_of_view_in_video_games https://developer.valvesoftware.com/wiki/Field_of_View Of course I should show these excellent examples provided by forum members (guys please take credit here!) And now we get into the real meat and potatoes of this thread: how this affects our perception of the image. Food for thought, let's start with Escher.
https://en.wikipedia.org/wiki/Optical_illusion https://en.wikipedia.org/wiki/Perspective_(graphical) http://artsygamer.com/fov-in-games/ https://en.wikipedia.org/wiki/Angle_of_view#Cinematography_and_video_gaming https://en.wikipedia.org/wiki/Focal_length https://en.wikipedia.org/wiki/Normal_lens#Perspective_effects_of_short_or_long_focal-length_lenses https://en.wikipedia.org/wiki/Perspective_distortion_(photography) https://en.wikipedia.org/wiki/Angular_diameter https://en.wikipedia.org/wiki/Perceived_visual_angle https://en.wikipedia.org/wiki/Depth_perception And really, if you only read ONE of these links, this is the one I feel would be most pertinent: https://en.wikipedia.org/wiki/Forced_perspective Look at the CSGO image above, look at the Odessa Steps....
  13. First, some background: We set our sensitivity such that a certain amount of mouse movement will result in a certain amount of soldier movement. In its simplest form, we have our hipfire at some cm/360. If this were real life, when we use a magnified optic, that means that our movement is magnified accordingly. This is why shooters take such measures as controlling their breathing and heart rate. In game, where magnification amounts to a reduced FOV, this is very unwieldy. Having the same cm/360 at all levels of zoom just doesn't work. As we zoom in, the sensitivity becomes impractically high - it feels too fast. Muh feels. So, we can use some formula to reduce our sensitivity in proportion with reduced FOV. The math is fairly straightforward in this case; it is what we here refer to as "0% monitor matching". So, we try this... and it doesn't feel right either. Muh feels. So, we can try to make our 3D game respond as though it were a 2D space like our desktop. This is obviously never going to work, because the 3D game world is NOT in 2D no matter how hard we try to treat it as though it is. So, sometimes it feels right, and other times, not so much. Muh feels. Now, 'muh feels' is a term which often carries negative connotation. The implication of saying 'muh feels' is that the person is ignoring 'muh reals' - ignoring reality in favour of their subjective sensation. I want to state very early on here, that such an attitude is not the point of this thread. On the contrary, it is clear to me that there is a strong reasoning for the way that sensitivity changes across FOV changes are perceived - and this is the golden word here - Perception. These 'muh feels' are not coming from nowhere. Yes, there will be some psychological effects like placebo and memory and many many others, but I don't think that any of us could put all of our observations down to just these effects.
What we do know is that the human brain performs a great deal of work on the image captured by our eyes, to determine relevant data such as the distance, angle, and size of objects in the game. It should be clear to us all by now that this image processing performed by our brains is having an effect on the way our movement feels in-game - otherwise, 0%MM would just work.

In other threads, a great deal of work has been done to find a formula which 'feels' right. Just as much of that work has validated previous formulae, I hope that this thread will do the same. However, the intention of this thread is to take a different approach to finding that 'right' formula. Rather than a process of elimination - in other words, trying a formula and adjusting it according to 'muh feels' until it 'feels' 'right' - I want to take a look at WHY it doesn't 'feel' 'right'. I believe that a more scientific approach will be beneficial as a counterpart to the scatter-gun approaches which we have used in those other threads.

I also want to be clear that I am by no means an expert on this subject. Human visual perception is kinda rocket science and I warmly welcome any input on this. However, the intention here is not to simply say "it feels too slow" or "it feels too fast", but to figure out WHY. The first step in solving any problem is to define that problem clearly (every properly defined question contains its own answer), and to the best of my knowledge, this has not yet been done. I've collected a fair few documents and videos as a basis for this, and (largely because I have too many tabs open in my browser) I think it's about time I posted these links here for future reference.
I hope you might take the time to look these over and give it some thought, and perhaps you might even have some information you could add to the discussion - I need all the help I can get. The next post in this thread will be a 'library' of links that we can reference, which I will update over time as more information becomes available, and the following post will be a brief overview of what I've been able to derive from that information, which appears to be related to our perception of 3D games projected on our 2D displays.
  14. I'm avoiding circles because they are misleading. It seems like a good way to represent what we're doing and seeing, but it actually represents something rather different... So we are kinda thinking about it wrong, because we are visualising something that never happened. Sorry I'm taking such a long time to get back to you with the drawings, bro. IRL has a funny way of changing plans :\ I expect the xmas period should offer me some time off to do fun stuff like this. Hope you're having a good break.
  15. Just as a follow-up to the above, the formula from tawbaware is correct. The one from wolfram is specifically for projecting the sphere onto the plane, as in a map of the earth. It doesn't apply here, because we aren't mapping a sphere (and that's why future diagrams from me regarding viewspeed have NO circles). @potato psoas Thought you may find this interesting.
  16. Not your fault mate, I'm trying to draw this geometry in your mind with my clumsy words, and I'm rushing it to boot. I'll make pretty pictures tomorrow, probably with this: https://www.geogebra.org/geometry
  17. Again, I'm not suggesting the same cm/360 at different FOVs. I think I need to draw this so it's more apparent what I'm getting at. It's 1am here and I know it will take me a long time without some kind of geometry drawing app, so I should do it tomorrow or I'll be up all night. I'll try and find us a good tool to draw this stuff so you don't have to make mspaint do magic like that... Wish me luck.
  18. No: same cm/360, at the same FOV, regardless of the angle of the turn. If horizontal sensitivity were 2x vertical, then the same cm vertically and horizontally would be a 360 vertically and a 720 horizontally, which would certainly destroy any muscle memory for aiming.
  19. potato psoas, what I'm saying is that horizontal and vertical sensitivity should always be the same, because 1/360th of a full rotation is 1 degree no matter which way we rotate, and we always want the mouse to have the same counts per degree. Thinking this through, I feel like in the past we've usually focused on the changing FOV as though the rendered world on our screen were changing. This is evident in pretty much every diagram on these forums where the FOV is represented by a circle, and we have different sized circles for different FOVs. Which seems to make sense. But let's remember this image: the amount of distortion at any point in the world actually never changes. We could imagine that everything is like the BF3 180 FOV video, all the time - we just have a limited window onto that exact image. The FOV setting in game controls the size of that window, and then whatever is within it is scaled up to fill our monitor.
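The fixed relationship between the vertical FOV and the horizontal "window" it implies can be written down directly. This is a sketch under a standard rectilinear projection, where tan(HFOV/2) = aspect × tan(VFOV/2); the function name and the 16:9 default are mine, not from the thread:

```python
import math

def hfov_from_vfov(vfov_deg, aspect_w=16, aspect_h=9):
    """Horizontal FOV implied by a vertical FOV under a rectilinear
    projection: tan(hfov/2) = (width/height) * tan(vfov/2)."""
    half_v = math.radians(vfov_deg) / 2
    half_h = math.atan((aspect_w / aspect_h) * math.tan(half_v))
    return math.degrees(2 * half_h)

# The familiar "90 FOV on 4:3" vertical angle of 73.74 degrees:
print(round(hfov_from_vfov(73.74), 2))  # ≈ 106.26 on a 16:9 screen
```

Because the projection itself never changes, this conversion is the same at every zoom level - the FOV setting only decides how much of the scene falls inside the window.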
  20. That's...mspaint.............. You're a God. Sorry I lack the skill and patience for that jazz lol
  21. Hmm, I don't know about that. IMO the vertical and horizontal sens should be the same, since 1 degree up and 1 degree left are the same angle of 1 degree no matter how that 1 degree is represented on screen. Consider how your suggestion would affect a player who can rotate around the Z axis (for example an airplane, which can roll). Now that you bring it up, "would this work in a plane?" is probably a good checklist item for any formula...

Right, and this is what I am trying to say about perspective. We already know that simply accounting for the change in vertical FOV is not sufficient to account for the perceptual effect of the distortion in the horizontal plane, so we need to look for another constant. In viewspeed v2, that constant has been the diagonal assuming 1:1... which, to be honest, made complete sense to me. Until I thought it through some more. Really, it's just taking another arbitrary horizontal measurement, which is the same mistake we made when assuming 4:3 and 16:9 in the past. If we think about it, the horizontal distortion is a result of the formula used for the projection and should be derived from said formula. Let us remember that at 180 VFOV (beyond which the projection fails entirely), there is NO distortion in the horizontal plane, because 180 VFOV = 180 HFOV. Note that if you block off the sides of this video so that you see a square (I just held up my hands), the lines of perspective to the vanishing point in the centre are perfect 45 degree angles to the corners of your square. The distortion occurs relative to the reduction in VFOV, regardless of horizontal screen space.

Dude, how are you drawing those images? A picture tells a thousand words and I'd love to be able to communicate as clearly as you can with those images.
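The idea that the distortion comes from the projection formula itself can be made concrete. Under a rectilinear projection, a ray at a fixed world angle lands at a screen position given by tan(angle)/tan(FOV/2); the sketch below (function name and example angles are mine, not from the thread) shows how the same angular offset occupies very different amounts of screen at different FOVs:

```python
import math

def screen_fraction(angle_deg, fov_deg):
    """Fraction of the half-screen (0 = crosshair, 1 = screen edge) at
    which a ray `angle_deg` off-centre lands, under a rectilinear
    projection with the given FOV along that axis."""
    return (math.tan(math.radians(angle_deg))
            / math.tan(math.radians(fov_deg) / 2))

# The same 12-degree offset sits near the crosshair at a wide FOV
# but almost at the screen edge through a narrow (zoomed) FOV:
print(round(screen_fraction(12, 103), 3))
print(round(screen_fraction(12, 30), 3))
```

This is the window-and-scale picture in numbers: the target's world angle never moved, only the portion of the projection shown on the monitor did.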
  22. Quite right. And by that logic, your formula should work equally well for 16:9, 21:9, 1:1... and 0.5:1, 0.2:1, etc. This is the perspective stuff I was banging on about earlier in the thread. I still believe that not accounting for this is our greatest oversight, and probably the reason we keep hitting walls.
  23. Looks solid! Would be interesting to see this same test with two circles, one small like this and one very large. I think it's best with some kind of crosshair too, so we have a more obvious point of aim.