DH6034

CROWDSOURCED PARTICIPATION AND REFLECTION

The data collection for this project supports the AI model developed by Restor, a platform set up to facilitate the funding of projects such as nature restoration and to provide openness and transparency in how funds are used over the course of a project, so that waste is avoided. The model is intended to increase the public’s confidence in restoration work on the ground and to better protect nature. More specifically, it is designed to identify individual trees from drone images for canopy delineation: the trees labelled by the public become training data that teach the model to differentiate a single tree from a group of trees, and the resulting tree locations make it faster to check whether the trees are still there and how they are growing.

The project can also be used for educational purposes, helping the public understand the importance of ecosystems and ways to protect them, and it can serve as an indicator for monitoring ecosystem health. By observing the growth of labelled trees, the state of their leaves and their interaction with other organisms, the health of the ecosystem can be assessed and timely measures can be taken to protect the ecological balance.

After selecting the project, I worked through the pre-marking guide, which asks participants to mark trees, leafy or leafless, individually or as clusters. With this I began the first few sheets of the marking task, but I soon discovered it was not as simple as it seemed. Much of the imagery is dense greenery, and the marking relies on subjective human vision: apart from the large, separate trees that I could distinguish immediately, there were many others I struggled to identify as either a tree or a bush, which made it even more difficult to determine the centre of the tree. As time went by, labelling such small and fiddly details started to annoy me, because I lacked knowledge of the relevant areas, such as closed crowns, dead trees and leafless trees. I struggled to complete even four images; had any other participant been in the same position, would they not have slipped into random marking, producing inaccurate data that fails to meet the needs of the later modelling?

Fortunately, knowledge gained from different disciplines helps an individual think from multiple perspectives. The first thing worth mentioning is what I learned during the project about canopy closure and crown shyness. When neighbouring canopies are in contact or very close together, the stand is called a “closed canopy”; measuring canopy closure during the project is a good way to assess the progress of restoration, with more than 60 per cent closure considered a successful restoration. Crown shyness is the opposite phenomenon: shy crowns do not shade one another and instead leave furrow-like gaps between them.
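To make that 60 per cent threshold concrete, here is a minimal sketch of how canopy closure might be approximated from a drone image, assuming we already have a binary canopy mask (1 = crown pixel, 0 = background). The mask, threshold and function names are my own illustration, not Restor’s actual pipeline, and projected crown cover is used here only as a simple proxy for closure.

```python
import numpy as np

def canopy_closure(mask: np.ndarray) -> float:
    """Fraction of the image area covered by tree crowns.

    `mask` is a 2-D binary array where 1 marks a crown pixel.
    """
    return float(mask.sum()) / mask.size

def restoration_successful(mask: np.ndarray, threshold: float = 0.60) -> bool:
    # The 60 per cent figure comes from the project guidance quoted above.
    return canopy_closure(mask) > threshold

# Toy example: a 4x4 patch with 10 of 16 pixels under canopy (62.5% closure).
patch = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
])
print(canopy_closure(patch))          # 0.625
print(restoration_successful(patch))  # True
```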

For data projects like this, there is an issue to consider: the markers are people from unrelated fields, and they may not even read the detailed pre-task guide, or, if they do read it, they may not follow it exactly when carrying out the task. This leads to data being collected incorrectly.

In the course of the task, I could not help but marvel at what a cost-effective model of working this is: tasks are completed at relatively low cost simply by distributing them to a large number of volunteers, which makes it possible to mobilise people from around the globe, in other words to achieve round-the-clock productivity. At the same time, quality control is a great challenge, which was intuitively obvious from my own participation: I lacked the experience to locate the centres of trees or to judge how many trees there were, and the tedious searching process wore down my patience and made me tire of the task easily. Designing a suitable incentive mechanism is therefore an element worth considering. However, volunteer work ultimately depends on the volunteers themselves; it is non-compulsory, and it probably does not need to be gamified like passing levels in a game. All that is needed is for participants to complete the task as accurately as possible, so as to improve the accuracy of the data.

It is obviously inappropriate to draw conclusions and make judgements from a single project, so I picked another one: finding Martian nebulae. The difference is that I was very patient with the labelling process for this task, because the images were smaller, or rather because fewer observations were required per task. As with the trees, it is enough to mark the nebulae you believe are correct: look for two distinct legs and a peak, characterised by bright vertical lines and a bright top, with a distinctly dark area in the middle.


Another difference from the previous data collection was that the data was presented as four different images at different resolutions, so that volunteers can compare across images and mark on whichever image they prefer. Comparing different images seemed to sustain my patience for longer, as opposed to the exhaustion and impatience caused by searching hard within a single image.

Because the search area was narrowed and could be cross-checked across the four pictures, the accuracy of the search improved. This prompted me to wonder whether volunteers could be given a short marking test of roughly three to five pictures before the real task begins, to gauge the accuracy of participants and make the collected data more usable; an accuracy of seventy to eighty per cent would be sufficient. A degree of pre-testing before data collection begins helps weed out as much of the uncontrollable as possible up front, such as random labelling.
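As a rough illustration of the idea, here is a minimal sketch of such a qualification step, assuming each test image has a known “gold” answer and using the seventy per cent pass mark suggested above; the function and field names are hypothetical and not part of any existing platform.

```python
from dataclasses import dataclass

@dataclass
class TestItem:
    image_id: str
    gold_label: str       # the known correct answer for this test image
    volunteer_label: str  # what the volunteer actually marked

def passes_pretest(items: list[TestItem], pass_mark: float = 0.70) -> bool:
    """Admit a volunteer only if they match the gold answers often enough."""
    if not items:
        return False
    correct = sum(item.volunteer_label == item.gold_label for item in items)
    return correct / len(items) >= pass_mark

# Example: 4 out of 5 test images answered correctly -> 80% -> admitted.
test_run = [
    TestItem("img1", "single tree", "single tree"),
    TestItem("img2", "cluster", "cluster"),
    TestItem("img3", "single tree", "cluster"),
    TestItem("img4", "no tree", "no tree"),
    TestItem("img5", "cluster", "cluster"),
]
print(passes_pretest(test_run))  # True
```

A failed pre-test need not exclude a volunteer outright; it could simply route them back to the marking guide before they contribute to the live data.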

As a student of digital arts and humanities, the projects that come to mind are about manuscript transcription, and I searched the site for guidance on transcribing Spanish manuscripts. In fact, I do not really recommend manuscript transcription as a crowdsourcing project. Firstly, it requires a high degree of expertise; the Spanish manuscript transcription project does mention that each manuscript will be transcribed several times and the versions compared before a formal transcription is made, but the transcribers themselves still lack the appropriate expertise, especially for manuscripts that are highly specialised in a particular field. Even for manuscripts that are not field-specific, complex or poorly written handwriting makes transcription the most time-consuming and labour-intensive kind of task, which can be very challenging.

Instead of collecting data on specific things or images, why not collect ideas from the general public? That does not depend on any professional background; the only thing needed is imagination. This reminds me of a factory in China that makes plush toys for sale. As is well known, every industry in China is extremely competitive, and you have to work very hard just to avoid falling behind.

The owner of the factory followed the trend of selling through TikTok and, almost by accident, turned a strange idea proposed by a netizen into a physical product. It unexpectedly became popular on the Internet, so the factory’s toy business went down the road of creativity, that is, all of its designs now come from the public’s ideas. Besides setting the factory apart, the act of giving feedback and responding to netizens’ opinions has greatly increased user stickiness and raised the factory’s popularity.

Brabham suggests that a crowdsourcing solution to ideation problems is “peer-vetted creative production” [1] (p. 49). Collecting ideas from netizens during the ideation phase of product design, and then discovering the best ideas through peer review, is useful for addressing questions of design taste and user preference. Threadless.com is a good example of this ideation process, where the crowd comes up with ideas for the design of a product, media content or physical space; since the crowd is also the end user of that product, content or space, it is well placed to choose the best idea.

 

Reference

Brabham, Daren C. Crowdsourcing. The MIT Press Essential Knowledge Series. Cambridge, Mass.: MIT Press, 2013.
