In this article you will find all of the key terms that are commonly used at CrowdFlower.
A job is composed of a customizable interface that connects your data to an online workforce. Each job on the CrowdFlower platform has data rows, instructions, customizable questions for your use case (written in CML), test questions, and is worked on by contributors. Contributors submit judgments on the rows of data via a worker interface. All jobs in a single account can be found here and will be identified by a unique numeric id.
Typically, the jobs that work best on CrowdFlower share a few characteristics:
- Jobs are typically sizable tasks that would be unreasonable or inefficient for one person (or even a small team of people) to complete on their own.
- They can be completed from a computer but usually cannot be fully automated, or carried out by a computer.
- They can be organized into consistent, discrete steps that contributors can complete independently.
Some examples of ideal jobs can be found here.
Contributors complete groups of rows at a time, called pages. Each page is a collection of one or more randomly selected rows of data. Each time a contributor clicks the ‘Submit’ button in a job, they complete a page of work and are paid for that entire page. If your job uses test questions, which is always recommended, each page will contain one test question by default.
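As a rough sketch of how a page might be assembled (random data rows plus a single test question), consider the following. The rows-per-page count and the function name here are illustrative assumptions, not the platform's actual implementation:

```python
import random

def build_page(data_rows, test_questions, rows_per_page=5):
    """Assemble one page of work: randomly selected data rows
    plus a single test question, shuffled together.

    Illustrative only -- rows_per_page is configurable per job
    on the real platform.
    """
    page = random.sample(data_rows, rows_per_page - 1)
    page.append(random.choice(test_questions))
    random.shuffle(page)
    return page

rows = [f"row-{i}" for i in range(100)]
tests = [f"test-{i}" for i in range(10)]
page = build_page(rows, tests)
print(len(page))  # 5 items: 4 data rows and 1 test question
```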
Test questions serve the dual purpose of training contributors and monitoring contributor performance. Contributors are given a score that reflects their accuracy on test questions in a given job. If a contributor answers a test question incorrectly during work mode, their accuracy is reduced and they are provided with the correct answer and a reason for the chosen answer.
A judgment is the set of answers submitted by a contributor on a row of data. It is recommended to collect multiple judgments and compare them to one another or aggregate them into a single top response. For each job, you can specify the number of judgments you would like each row to receive. If you would like five judgments per row, that means five different contributors will need to provide an answer to every row before the job is finished.
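Aggregating judgments into a top response can be pictured as a simple majority vote across contributors. The sketch below ignores the trust weighting described later and uses an illustrative function name, not a platform API:

```python
from collections import Counter

def top_response(judgments):
    """Return the most common answer among a row's judgments.

    `judgments` is a list of answers submitted by different
    contributors for the same row of data.
    """
    counts = Counter(judgments)
    answer, _ = counts.most_common(1)[0]
    return answer

# Five contributors judged the same row:
print(top_response(["cat", "cat", "dog", "cat", "dog"]))  # cat
```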
A trusted judgment is an answer from a contributor with an accuracy score higher than the minimum accuracy you set on the settings page. All trusted judgments are included in your results.
An untrusted judgment, also known as a ‘tainted judgment’, is an answer from a contributor whose accuracy score has fallen below the minimum accuracy set. Untrusted judgments are not included in your results unless you specify otherwise. You will not collect any tainted judgments if you run a job without test questions.
This is the number of judgments still needed for the job to complete. It fluctuates because contributors occasionally move from trusted to untrusted status. This figure is only relevant while a job is running.
CrowdFlower has scaled to the world’s largest pool of online contributors by partnering with dozens of websites that maintain large online communities. We call these partners “channels.” Our contributors access CrowdFlower jobs via offer walls on channel websites. Examples of channels include ClixSense, Swagbucks and Neobux.
Contributors are the people who work on your job and are compensated for their judgments. Individual contributors are identifiable by a Contributor ID.
Accuracy is the contributor’s score on test questions in a single job. If a contributor’s accuracy falls below a preset threshold, they become “untrusted”, their judgments are tainted, and they are no longer allowed to participate in that job. Contributors who maintain an accuracy above this threshold are considered “trusted.”
Each data row has a state that describes its status. The states available to a row are:
- New – has not yet been ordered
- Judgeable – has been ordered and is awaiting judgment collection
- Finalized – has received enough trusted judgments to be considered complete and will no longer collect judgments
- Golden – a test question
- Hidden – a disabled test question
CML (CrowdFlower Markup Language)
CML is CrowdFlower’s own markup language, which features a broad collection of specialized questions that support a wide array of use cases. You can read more about CML in the CrowdFlower Success Center. Within CML, questions are data inputs that allow contributors to submit work through your job's user interface. They allow you to dictate the type of answer you receive for each question you ask in your job. CML provides a variety of common question formats (e.g., text inputs, radio buttons, checkboxes, etc.). At least one question is required to launch your job.
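For illustration, a minimal CML question might look like the following. This is a sketch of a required radio-button question; the label text is invented, and you should consult the Success Center for the exact tags and attributes your use case needs:

```xml
<!-- A required radio-button question written in CML (illustrative) -->
<cml:radios label="Is this product review positive?" validates="required">
  <cml:radio label="Yes" />
  <cml:radio label="No" />
</cml:radios>
```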
Once a job is complete, all of the judgments on a row of data will be aggregated with a confidence score. The confidence score describes the level of agreement between multiple contributors (weighted by each contributor’s trust score), and indicates our “confidence” in the validity of the aggregated answer for each row of data. The aggregate result is chosen based on the response with the greatest confidence.
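The idea behind the confidence score can be sketched as trust-weighted agreement: each answer is weighted by the trust score of the contributors who gave it, and the confidence of an answer is its share of the total trust on the row. This is an illustrative approximation, not CrowdFlower's exact formula:

```python
from collections import defaultdict

def aggregate(judgments):
    """Pick a row's aggregate answer and its confidence.

    `judgments` is a list of (answer, trust_score) pairs, one per
    contributor. Confidence is the winning answer's share of the
    total trust, so full agreement yields 1.0. Illustrative only.
    """
    weights = defaultdict(float)
    for answer, trust in judgments:
        weights[answer] += trust
    total = sum(weights.values())
    best = max(weights, key=weights.get)
    return best, weights[best] / total

answer, confidence = aggregate([("yes", 0.9), ("yes", 0.8), ("no", 0.6)])
print(answer, round(confidence, 2))  # yes 0.74
```

Two highly trusted contributors agreeing on “yes” outweigh one less trusted contributor answering “no”, so “yes” wins with a confidence well below 1.0, reflecting the disagreement.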
Before a contributor can enter your job, they must pass quiz mode, which is composed entirely of test questions. This ensures that only contributors who prove they can complete your job accurately will be able to enter it. Contributors who fail quiz mode are not paid and are disqualified from working on the job.
Throughput is the speed at which the crowd completes your job. This is measured as the number of finalized rows of data per hour.