Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Look into who was his competitors and what tech he was bringing to the table....…" (ytc_UgzAKAjnp…)
- "Ai learning ma azz!! if they learning why they still need to steal artists artwo…" (ytc_UgzKmhdYs…)
- "If AI isn't sentient or conscious in some way, that means humans are so basic in…" (ytc_UgwKIVDYA…)
- "AI mastered ahitty sloppy art, sorry to all the \"artists\" who are affected by it…" (ytc_UgwV3XTmn…)
- "I have gone done my research. This is the stupidust video I ever watched about a…" (ytc_UgxljXyF8…)
- "Hmmm artificial intelligence after you insert micro chip in human beings,half h…" (ytc_UgwhD_uR-…)
- "If so many people are loosing their jobs, than who is going to pay their income?…" (ytc_UgwIeb3Z6…)
- "I genuinely think most AI bros think AI is better ironically as a sort of rage b…" (ytc_Ugw0Wmn1v…)
Comment
As a programmer I can confirm that chatGPT can and will confidently give you blatantly wrong and dangerous information, never trust chatGPT
The reason is simple, chatGPT, much like humans, doesn't know everything, but unlike humans, chatGPT doesn't have the ability to know when it doesn't know something, so it just makes shit up, this is an inherent weakness of LLMs and is likely impossible to solve
LLMs are also vulnerable to being fed wrong, misguided, dangerous or outdated data as part of their training dataset, which means ChatGPT can also just have incorrect information, or be unable to distinguish what's correct or what isn't, or conflate two similar but also completely different concepts
Platform: youtube · Video: AI Responsibility · Posted: 2024-08-09T20:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzSEi0hMtaKakgNCkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNPfsgUcD7Otxzpy94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0I6r0k8t0K74FoVh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgzwEtJID-cvhSNkPTR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"amusement"},
  {"id":"ytc_Ugz4Pt7OA6Pn4wTrS4h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLWCE696BnpYou9m94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNEh4-20nXRWDXLZt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxxXM-0GOV_fmYM3YV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxw5UVElJPdkvdDAHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz67E1UQsjmlZe0rYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
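The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated before populating the per-comment view (the allowed category sets below are inferred only from the outputs visible in this sample; the actual codebook likely defines additional values):

```python
import json

# Category values per dimension — inferred from this sample output only;
# the real codebook may include more categories (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "amusement", "fear", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        if not cid.startswith("ytc_"):
            continue  # skip rows without a recognizable comment ID
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Lookup by comment ID, as in the inspector above:
raw = ('[{"id":"ytc_Ugxw5UVElJPdkvdDAHB4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugxw5UVElJPdkvdDAHB4AaABAg"]["emotion"])  # outrage
```

Validating against a fixed category set catches the common failure mode where the model invents a label outside the codebook; invalid rows are dropped rather than silently stored.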