Your relationship with AI data – Data collection

Why not listen to this blog instead, or follow episodes on our Spotify Channel?

The following is an extract from the professional development workshop entitled “The association of artificial intelligence in human performance” from Global Performance Testing International.

It is forecast that, through 2026, organisations that do not enable and support AI use through AI-ready data practices will see 60% of AI projects fail or be abandoned. Integrating foundation education across the broader team builds buy-in from an empathetic perspective and encourages strong leadership.

For artificial intelligence to work well, data collection must be reliable and drawn from sources whose error or deviation is known.

Consider your relationship with artificial intelligence like a BLT sandwich. Your “actual” intelligence is the bun on the outside, representing both the creation or sourcing of the data you question and your interpretation of the results.

Created on Copilot – Prompt: “Create image of burger with lettuce on the bottom, then bacon, then tomato, then mayonnaise on top” 

The lettuce is the uploaded data; the bacon is the algorithm that looks for patterns in that data; the tomato is the machine learning that sits on top of the algorithm; and the mayonnaise is the artificial intelligence formed on top of the machine learning. The bottom bun is the actual intelligence of the prompt you ask, and the top bun is your actual intelligence considering and interpreting the artificial response. The BLT is not a BLT unless all of these components are right.

If you ask a closed question, the bread at the bottom may look more like a paper napkin. And if you rely solely on the artificial intelligence without weighing it against your actual intelligence, the top layer is just mayonnaise, and you are going to get it on your face when you try to eat the sandwich.

Artificial intelligence is also learning to balance ethics. An example of this became apparent when we asked an AI assistant to create an image to represent the paragraph above.

The AI assistant created an image and stated, “Here’s your image of a girl enjoying a messy, open-faced burger – just the way you described!” In fact, the image contained no person and showed a BLT with no top bun.

The prompt: “Girl eating bun with lettuce on base, bacon on top, tomato on top, mayonnaise on top and no top of bun. Making a mess”.

Prompted with “make base a white paper napkin and remove bread” the AI assistant responded “I couldn’t generate that image because it violated a safety policy. Since it involves a messy eating scenario without bread, it may be interpreted as unsafe or inappropriate. Let me know if you’d like a cleaner version or a different setup.”

This is a clear example of how artificial intelligence can direct the narrative, reinforcing the need for caution when using it and for applying actual intelligence.

Bring your team up to speed on applied artificial intelligence in your workplace with a dedicated professional workshop from Global Performance Testing International. This workshop covers:

Learning outcomes
• An understanding of what the term AI means
• Knowledge of how artificial intelligence works in principle with real-world examples explained
• An understanding of how to take advantage of artificial intelligence, with use cases in allied health
• Experiences that break down media-driven anxieties about AI and build competence in decision-making around artificial data and recommendations

GPTQA Online Workshop – Association of artificial intelligence in human performance
