
A question about depth perception

MadArchitect

1E - BANNED
The Pope of Literature
Posts: 2553
Joined: Sun Nov 14, 2004 4:24 am
Location: decentralized

A question about depth perception


I'm reading through an article about pioneering photographer Roger Fenton, and a thought occurred to me, a question really, that I thought someone on BookTalk might know how to answer.

Our ability to see in depth is, as I understand it, made possible by having two visual receptors (i.e., eyes) taking in any given view from two slightly different angles. The information taken from each is then coordinated by our brains and represented as an image in three dimensions. If you look at anything with only one eye, you'll automatically make some adjustments to perceive how it might extend in depth, but you essentially lose true depth perception. So my question is this: Does our perception of depth flatten out at the periphery of our vision?

What I mean is, if depth perception is achieved through the coordination of information from both eyes, and if the information gathered at either side of our peripheral vision is largely the work of a single eye, then shouldn't that information (at the periphery) be less stereoscopic? After all, if you can see out of the corner of your eye a person standing to the far left, your right eye is picking up little if any indication of that person. If you close your left eye and can't see that person at all, then it stands to reason that, even with both eyes open, you're only getting a monoscopic image of them.
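(For anyone who likes to see the geometry behind this: stereo depth comes from triangulating the small difference, or "disparity," between where a point lands in each eye's view. Here's a rough back-of-the-envelope sketch in Python, with made-up numbers standing in for the eye's optics, just to show that a point visible to only one eye produces no disparity and therefore no triangulated depth.)

# Rough sketch of stereo triangulation: depth Z = f * B / d, where B is the
# distance between the eyes (the "baseline"), f stands in for the eye's focal
# geometry, and d is the disparity between the two retinal positions.
# All numbers below are illustrative, not measured values.

def depth_from_disparity(baseline_m, focal, disparity):
    if disparity <= 0:
        # A point seen by only one eye has no disparity, so there is
        # nothing to triangulate -- no binocular depth signal at all.
        return float("inf")
    return focal * baseline_m / disparity

baseline = 0.063   # roughly a human interpupillary distance, in metres
focal = 800        # arbitrary stand-in for the eye's optics

for d in (40, 10, 2, 0):   # disparity shrinking toward zero
    print(f"disparity {d:>2} -> estimated depth {depth_from_disparity(baseline, focal, d):.2f} m")

(The point isn't the numbers; it's that outside the region both eyes can see, the disparity term simply isn't there, which is exactly the situation at the far edges of the visual field.)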

To be honest, just by looking, I can't tell whether or not my own depth perception does flatten out as I suspect it must. So I'm hoping that someone here will have some more concrete information on the subject. It's not particularly urgent that I know, just sheer curiosity. Thanks in advance!
40 Helens
Getting Comfortable
Posts: 14
Joined: Sun Mar 09, 2008 11:05 pm

A new person dredging up old threads


I'm far from an expert, but I've been interested in vision since reading a book (Phantoms in the Brain by Vilayanur Ramachandran) that broke it down into many different processes, and I often find myself conducting experiments with my own peripheral vision.

I believe light has to fall on the retina's fovea in order for our brains to fully assemble shapes. I think I can detect color (although it may just be the contrast between light and dark objects), and I can perceive the size of the different contrasting chunks around me, but I can't pin down any shapes. (I'm experimenting with a globe on my desk as I write this.) Thus, I suspect our peripheral vision is, somehow, less than flat; it might not give us enough information to form even a two-dimensional representation of the objects around us.

I find it interesting that peripheral vision gives us information in a way that feels more like intuiting than seeing. If the vague forms it picks out resemble an unexpected person, or if those forms suddenly change their relative sizes (indicating movement), we tend to startle. I've also noticed, after endless hours of "hide and seek" with my kids, that people are more likely to notice you in their peripheral vision if your eyes are directed at them than if they're cast downward. It's as if we have a special process dedicated to spotting "watching eyes," which would make sense from an evolutionary standpoint.

Helen
President Camacho

1F - BRONZE CONTRIBUTOR
I Should Be Bronzed
Posts: 1655
Joined: Sat Apr 12, 2008 1:44 pm
Location: Hampton, Ga
Has thanked: 246 times
Been thanked: 314 times


There is also a range at which people can see when looking off into space, right? I think it might be even harder (or easier, I dunno) for someone who has trouble focusing at infinity to see peripheral objects. I think that anything that casts a significant shadow can betray a 3-D image. My own ability to distinguish 3-D objects in the periphery does seem limited. I can reconcile the image I see in my peripheral vision with what I assume that object is (3-D) because of light reflected off of it, or a shadow, or just because I know what the object is. How I would know it was 3-D without these very obvious clues is hard to tell. I'm sure this relates to camouflage-type technology, where you can be looking directly at something and not recognize it as a separate 3-D image.
Saffron

1F - BRONZE CONTRIBUTOR
I can has reading?
Posts: 2954
Joined: Tue Apr 01, 2008 8:37 pm
Location: Randolph, VT
Has thanked: 474 times
Been thanked: 399 times


Vision is fascinating. My daughter has strabismus, or crossed eyes. If this condition is not corrected, it will cause a person to lose their depth perception.
Children with strabismus may initially have double vision. This occurs because of the misalignment of the two eyes in relation to one another. In an attempt to avoid double vision, the brain will eventually disregard the image of one eye (called suppression).
Our doctor explained, and I've read, that what we see is the result of learning. Our brains need to learn to interpret the information that is received through the optic nerve. Our brains create much of what we "see". There are actually holes in the information being taken in by the eyes. Our brains fill in the missing information, so that we always see a complete picture, if you will. I think the same must be true of peripheral vision. It makes sense that if we were seeing only what stereoscopic vision actually delivers, the stuff on the periphery would be flatter. I think our brains must compensate based on past information.

Here's another example of how our eyes and brains work together in order for us to see. I read about an experiment where volunteers wore glasses that made everything look upside down. At first everyone stumbled around. After a few hours the brain made the adjustment and the folks with the funny glasses were able to function normally. After about 24 hours in the glasses, the test subjects were asked to remove them. Guess what? They stumbled around again for a few hours until their brains readjusted to the right-side-up world.