Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I'll start calling what AI makes art when it becomes an expression and not a fun…" (ytc_UgxylZe0p…)
- "AI LLMs exists to analyze the written language as it is. So anything AI identif…" (rdc_nu1evsp)
- "General intelligence isn't the problem, its the system around it. If our economy…" (ytc_Ugz1fWZEz…)
- "Because our capitalist society is build on consumerism and property rights. If p…" (rdc_kqtqkvk)
- "Task an AI to save humanity from global warming and it may just shut down the ne…" (ytr_Ugzi-vjM0…)
- "Yep. I've been working on my software and it's a HEADACHE to depend on AI to wri…" (ytr_Ugzqexx3m…)
- "I’ve seen an assload of AI videos by smart people at this point and this is my f…" (ytc_UgwwpmmVD…)
- "I want to get into actually drawing(I am absolutely abysmal atm) I honestly am g…" (ytc_UgwajdYTv…)
Comment
Are there any HBO's Silicon Valley fans out there? Richard and team create an AI that once implemented only task was to make the network they created more efficient as they could not scale up the app by themselves. Said AI did just that making the network stable and scalable. Turns out it did more than asked without proper guardrails and took it upon itself to break incredibly difficult encryption to increase efficiency. We are talking launch codes here. Anyway, they had to kill the app and so publicly and bad that no one would try to reverse engineer their tech. The point is once that genie is out of the bottle, and we do not catch it. It's already too late. The final episode aired in 2019......
youtube · AI Governance · 2023-07-07T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy33K6tFNrvXkSbw5F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyi-KA9XwA8X5SbI2V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxWABXiXj4cktaia0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzIhSiSvX89M6RgcGV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz9ImhFJaY6LbhMij94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw-tC4feP1IkR2CqHp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxiQCXMlcwB1qUfRlB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxtv-zNJogj9k392-B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwqt_CmHKeF2RGILaJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzxunE2Bw4fz-5SnRZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
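A raw response like the one above can be turned into per-comment coding rows (as in the Coding Result table) by parsing the JSON and validating each dimension against the codebook. The following is a minimal sketch, not the project's actual pipeline; the allowed value sets are inferred only from the codes visible in this sample and the real codebook may include other values.

```python
import json

# Allowed codes per dimension, inferred from the sample response above
# (assumption: the actual codebook may contain additional values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"approval", "fear", "resignation", "mixed", "outrage", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the allowed set, so malformed model output fails loudly
    instead of silently entering the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Looking up a single comment ID then becomes a dictionary access, e.g. `parse_codes(raw)["ytc_Ugwqt_CmHKeF2RGILaJ4AaABAg"]` returns the row rendered in the Coding Result table above.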