Condor Watch Talk

Repeat Images

  • zekazoo (scientist)

    Hi Condor Watch team! I recently spoke with some new users from American Public University who were having trouble with the website. Two people reported being presented with the same images each time they classified. Has anyone else had this problem? If so, how did you get past it? I'm wondering if they may just be stuck in tutorial mode...

  • ElisabethB (moderator)

    Hiya,

    That is a new one for me. I haven't had repeats, and a quick check didn't find anyone else having that problem. What browser are they using? If it is IE, there are bound to be issues, as IE and the Zooniverse are not the best of friends. Try clearing out the cache.

    Hope this helps. If not, we'll have to take it up with the tech team.

  • wreness (moderator)

    I'm not sure how to verify this except by writing down the ID number of every photo, but I was thinking the same thing (yay, I'm not crazy! Well, not about this anyway!).

    For about the last two weeks, I am positive I've gotten an image back twice, and in a few cases three times, to ID again. I'd notice this because there'd be some very notable trait in the photo; it might be happening more often, but since many photos look so similar, a repeat wouldn't strike me as being the exact same photo.

    If they mean the intro photo popping up on the page every time they sign in (you can see a condor with a red tag eating), you do get that same pic each time you sign in. No matter what, Zooniverse tries to make you go through the tutorial every time you log in.

    The way around this is to hit the 'all animals marked' button on that photo and skip it.

    Hi American Public University! Welcome! 😃

  • zekazoo (scientist), in response to wreness's comment

    Thank you, ElisabethB and wreness! I will relay this helpful info and hopefully let you know soon whether it brings them some relief from this problem.

  • Rick_N.

    I don't think I've had any repeat images, although one picture of a carcass looks much like another. I have, however, seen an image I'd just viewed show up again in other people's 'recent' postings within hours, which surprised me; I presumed that it would be more random. Are the images released in groups rather than as the entire set?

  • wreness (moderator)

    If you comment on a photo and add something to the conversation on it, it gets posted again on the "recent" page with the new comment. So it doesn't always mean someone got the whole photo again to evaluate; it might just mean someone signed in, was looking through them, and decided to say something (and of course, funny stuff and sarcastic asides are always welcome :) ). People do get photos and don't comment, so it's unknown whether a photo was seen months ago or weeks ago but never called to attention.

    But sometimes it's obvious they did get the pic again. I've had this, too, where I got the same one pretty soon after it seems someone else did, or I see a photo of The Guy With The Nice Hand and it's been viewed 6 times already. Lately I've had 2 photos right in a row that were within minutes of each other. The pics are taken by motion-activated cameras, and I think they're taken every 3-5 minutes if there's activity (but I am not positive; one of the Smart Scientists can verify this). (There's a great "selfie" of an eagle taken over 4 frames, 4-9 minutes apart. He was posing, you can just see it.)

    So I'm seeing some of this too. The pics come in from so many cameras, with different venues helping to organize and collect them and then uploading them onward, and then Zooniverse puts them on its site for us to see. I kind of like the "series" - it's neat to see the interactions and who is moving where. 😃

    Whether there's a rhyme or reason to it, one of the scientists can clarify.

  • Rick_N.

    I'm no statistician, but I would venture that if one uploaded groups of, say, 1,000 images (there are 140,000?), such that the groups were numbers 1-1,000, 1,001-2,000, 2,001-3,000, 3,001-4,000, etc. in the list to be analysed, the later groups would be more accurate: as people get more expert at counting and interpreting images, the 'quality' of their decision-making increases.

    But if one image were, at random, number 20 in the list, another number 128,000, and another number 65,789, that would spread any 'quality bias' across the images. It would be good (or 'neat') if a scientist were to explain this to stats 'no-hopers' such as myself. You can never find a statistician when you need one...
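
    A minimal Python sketch of that intuition (all numbers and the learning curve below are invented for illustration; nothing here is measured from Condor Watch). It compares releasing images in ID order against a shuffled order, for volunteers whose accuracy improves with practice:

        import random

        N_IMAGES = 10_000  # hypothetical set size, much smaller than the real 140,000

        def accuracy(n_done):
            """Assumed learning curve: ~70% for a newcomer, rising toward 95%."""
            return 0.95 - 0.25 * (0.999 ** n_done)

        def group_means(order):
            """Mean simulated accuracy for the first vs. last 10% of image IDs."""
            acc = [0.0] * N_IMAGES
            for n_done, img in enumerate(order):
                acc[img] = accuracy(n_done)
            k = N_IMAGES // 10
            return round(sum(acc[:k]) / k, 3), round(sum(acc[-k:]) / k, 3)

        sequential = list(range(N_IMAGES))   # batch-style: IDs in order
        shuffled = sequential[:]
        random.shuffle(shuffled)             # random-style: IDs shuffled

        print("sequential release:", group_means(sequential))  # early IDs fare worse
        print("random release:    ", group_means(shuffled))    # bias spread evenly

    With sequential release the first 1,000 IDs receive only "rookie" classifications, while shuffling spreads early and late classifications evenly, which is exactly the 'quality bias' point above.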

  • vjbakker (scientist)

    Hi Rick,

    Great comment!

    The science team has expertise in condors, ecotoxicology, population biology, and social network analysis (i.e., analyzing biological data). We've teamed with Zooniverse and their collaborators, who have expertise in citizen science data and, in particular, consensus analysis from repeatedly processed images (i.e., analyzing data on data). Although Zooniverse is in charge of how images are doled out, I can explain the general philosophy:

    The images are put out in batches (e.g., 1-1,000, 1,001-2,000) to allow complete processing of a batch, which gives us a subset of data ready for analysis. If we put out all the images at once, we would initially end up with many, many images looked at only once, and it could take a year or two before we obtained enough data to support any type of analysis. So the batch processing gives us pilot data both for our research questions and for optimizing how future image batches are offered. Finally, it also hedges our bets: if users drop off and the project loses momentum, we still have a subset of fully processed images.
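
    As a back-of-envelope illustration of that trade-off (the repeat count and daily classification rate below are assumptions made up for the arithmetic, not project figures; only the 140,000 total comes from this thread):

        N_TOTAL = 140_000   # images in the full set (figure mentioned above)
        BATCH = 1_000       # images released per batch
        REPEATS = 10        # classifications needed per image (assumed)
        PER_DAY = 2_000     # classifications contributed per day (assumed)

        # Releasing everything at once spreads effort over all images, so a
        # complete, analysis-ready subset only exists once the whole set is done:
        print(N_TOTAL * REPEATS / PER_DAY, "days")   # 700.0 days, i.e. ~2 years

        # Releasing in batches yields a fully processed, analysable batch every:
        print(BATCH * REPEATS / PER_DAY, "days")     # 5.0 days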

    Because these early batches can serve as pilot data for things like how much repetition of images is needed, some repeated images may be given to individuals to assess repeatability within users. There are also lots of repeated images among users, as is stated on the site frequently, to allow assessment of convergence/consensus among multiple users. Some types of images may need fewer repeats (e.g., if no animals are present). If decision-making quality increases through time, as you mention, we may be able to reduce the number of repeats. Of course, the nature of a project like this is that there will continually be new users trying the project out alongside long-time users who become more and more experienced, so there will likely always be a mix of newbies and veterans. As a result, improvements in decision-making quality will be correlated with the number of images an individual has processed.
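
    For anyone curious what "consensus among multiple users" can look like in code, here is a minimal Python sketch; the record format and the plain majority vote are simplifying assumptions, not the actual Zooniverse aggregation pipeline:

        from collections import Counter, defaultdict

        # (image id, user id, label) records; the values are invented examples.
        classifications = [
            ("img_001", "alice", "condor"),
            ("img_001", "bob",   "condor"),
            ("img_001", "carol", "raven"),
            ("img_002", "alice", "no animals"),
            ("img_002", "bob",   "no animals"),
        ]

        def consensus(records, min_votes=2):
            """Majority label per image, with the fraction of users agreeing."""
            votes = defaultdict(Counter)
            for img, _user, label in records:
                votes[img][label] += 1
            results = {}
            for img, counter in votes.items():
                label, n = counter.most_common(1)[0]
                total = sum(counter.values())
                if total >= min_votes:  # only report images seen enough times
                    results[img] = (label, round(n / total, 2))
            return results

        print(consensus(classifications))
        # {'img_001': ('condor', 0.67), 'img_002': ('no animals', 1.0)}

    The same tally grouped by (image, user) pairs would support the within-user repeatability check mentioned above.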

    I hope this helps a little.

    Vickie
