Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_Ugxv4glDA…`: "@maoopan All shadiversity said in his video was that he likes using AI to make a…"
- `ytc_UgzZlmR5e…`: "Imagine ai over taking control from Putin Thrump and the Chinese and spank their…"
- `ytr_UgziHxeHr…`: "If it was an AI he should be free to think like every human are but because he i…"
- `rdc_o85c05f`: "We've been sleepwalking into a serious water shortage catastrophe for years and …"
- `ytc_UgwQ1CUIZ…`: "20:02 \"Your company probably is(n't just going to replaces those roles with AI).…"
- `ytc_UgwfRzjJ6…`: "Guys can someone tell me if I have a problem with ai chatbots too? I use it too…"
- `ytc_Ugw8N_Gmo…`: "If you use ai generated code and a problem comes up and you can’t solve it becau…"
- `ytc_UgzuJ9_xZ…`: "Yes students use AI to complete their assignments. Teachers do as well--even at …"
Comment
At my former company they were all about AI and making sure the employees where all really well trained on AI and LLMs and could talk about all of this to show clients how we could help integrate AI into other companies so they wouldn't have to do it alone and /or miss the boat. I asked in a meeting once how we could improve AIs to not hallucinate and it was very obvious that not only did the higher ups did NOT want talk about that part of how AI's work (*or don't) but they where also just assuming it would be worked out in future versions somehow (but no one had any idea how to do this) also when they rolled out MS Copilot they came out with mandatory training that basically said don't use it unless you get everyone to agree it was OK for a task and then someone has to double/triple check the AIs work or that person was subject to being fired if mistakes when out .... so why would anyone even use it?!? a person can take meeting notes ONCE and not have anyone have to check for mistakes.
Source: youtube · AI Jobs · 2026-02-04T22:3… · ♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyeyE6o39TqZJpByIh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwVO9O7SHblCSARunt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxqr13Blx9kQw29gIN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzCdTDVQBUXUixDwtp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwDlKAIMcgmi9dLK1B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwRIR0WMueAK3MBKl14AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxFWbYy-e77R7y2guB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz1OsP9B5FlkvO42m54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwHt-GmEjUWORtx-yt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYN9LUF3_BZADBvP54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]
```
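The raw response is a JSON array with one record per comment, keyed by comment ID, and the lookup view above maps an ID to the four coding dimensions. A minimal sketch of that parse-and-lookup step, assuming Python and a hypothetical `parse_coding_response` helper (the allowed code values below are inferred from the visible data; the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# This is an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"user", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"outrage", "indifference", "amusement", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping records with unknown dimensions or out-of-vocabulary values."""
    records = {}
    for rec in json.loads(raw):
        rid = rec.get("id")
        codes = {k: v for k, v in rec.items() if k != "id"}
        if rid and all(v in ALLOWED.get(k, set()) for k, v in codes.items()):
            records[rid] = codes
    return records

# One record from the response above, used for a lookup by comment ID.
raw = '''[
  {"id": "ytc_UgyeyE6o39TqZJpByIh4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''
coded = parse_coding_response(raw)
print(coded["ytc_UgyeyE6o39TqZJpByIh4AaABAg"]["emotion"])  # indifference
```

Validating against an explicit vocabulary like this is one way to catch hallucinated labels before they reach the coding table.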