Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> To all those people who are talking that AI can replace CEO and board members, I want you all to think again
> AI is a tool and can make mistakes and can have hallucinations this can lead to wrong decision and making wrong decision as an employee can be dangerous however making mistakes as the decision maker of the company (CEOs and board members) can be fatal
> Think about it, there're many AI for coding yet companies hire coders because generating code is one thing and debugging it is another most of the time a code generated by AI has a bug or is not scalable
> So AI may replace those whose jobs does make much impact if they took a wrong decision but they would require someone to keep them in check
> otherwise they will start ruling us humans because they are able to replace the highest management and guess who take the decision to replace humans from top management (its the top level management only)
> However if a company can think they can replace whole human workforce then I can bet you can't see that company in next 5-6 years
Source: youtube · Topic: AI Harm Incident · Posted: 2025-05-29T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzO28ymZanbsniI9WZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzj5gJ0UjRJ8XFZOF94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgykHWAj16yx0_9UhYl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwu5bxVK1EMdH1-nBd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwWEjeMl7ZUosUEPJB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzil3f37ZYf63S_wtN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfSaIk7fI2Ss_p2Fl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgzZE2NVv8A4bT-HGWJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwkQ611NB1TGiT_Hzl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxOfd0ZunKbxx29mzJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
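The raw response above is a JSON array with one object per coded comment, each carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and indexed by comment ID is below; `index_codes` and `EXPECTED_DIMENSIONS` are hypothetical names, and the two sample rows are copied from the response above.

```python
import json

# Assumed coding schema, taken from the Coding Result table above.
EXPECTED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

# Two rows copied verbatim from the raw LLM response shown above.
RAW_RESPONSE = '''[
{"id":"ytc_UgzO28ymZanbsniI9WZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwfSaIk7fI2Ss_p2Fl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"fear"}
]'''

def index_codes(raw: str) -> dict:
    """Map each comment ID to its coded dimensions, validating every row."""
    indexed = {}
    for row in json.loads(raw):
        comment_id = row.pop("id")
        missing = EXPECTED_DIMENSIONS - row.keys()
        if missing:
            # A row the model coded incompletely should fail loudly.
            raise ValueError(f"{comment_id} is missing {sorted(missing)}")
        indexed[comment_id] = row
    return indexed

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgwfSaIk7fI2Ss_p2Fl4AaABAg"]["policy"])  # industry_self
```

Indexing by ID is what makes the "look up by comment ID" view possible: one pass over the response, then constant-time retrieval of any comment's codes.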