Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Maybe this is really the age of Aquarius and all of humanity will be able to enj…" (ytc_UgwdyDSLx…)
- "he copy pasted from public ressource and it took minute for that i can copy past…" (ytr_Ugxl-x21w…)
- "now now....this is also a sigh of relief....I can do anything and blame it on de…" (ytc_UgzZKfXgK…)
- "What would motivate AI to grow? Can machines be "motivated" to do anything or ju…" (ytc_Ugz0OyaE4…)
- "@Cqat1 image generation AI isn't really limited to the "prompt to image" though…" (ytr_Ugw9c-pD9…)
- "Not true. The overriding promise for his first election was simply - Developme…" (rdc_j4xxql8)
- "I'm sorry but the furthest this conversation got to was hire had it is that some…" (ytc_Ugy2i5YoJ…)
- "Honestly, it's pretty obvious that Sophia is just a chat bot with pre-recorded a…" (ytc_UgyiXr4jI…)
Comment
I have worked in high tech for about 50 years, 15+ industries, aerospace included, with a huge amount of software development. AI is dangerous... minimal controls around its development, and no government oversight and laws to make sure it doesn't kill/harm people and property. The Shuttle software development team was a CMM L5 team that did extensive QA and testing before using it to launch a shuttle. Updates to software can not happen for space systems and nuclear weapons (as examples) ... there are no good times to do a 'fix' with an update/patch to software for systems like that. The permutations of changes that a driverless car, or truck needs to account for are too challenging, and people have died, and will continue to die without a robust and oversight process before those systems are initiated.
youtube | AI Jobs | 2025-11-01T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzoGC6V0mg2gloJkIV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUu_QejR7yHw8AhyJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxSM7Wr4YGv7Gmm_ed4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxytrAlXw_U87uMkxp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgypNA5SuR5OfBAQGvt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwe9HBeEKA3F-1JFNd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyQh5zjGZvJTCZoCsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwRt991CuMRCrEz9Td4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx7plp6hv8lHuQul8F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzOgbEol2j64FYsv4J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
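A raw response like the one above can be parsed and indexed to support "look up by comment ID" access. A minimal sketch, assuming only what the response itself shows (a JSON array of records with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields); the two records inlined here are copied from the array above:

```python
import json

# Raw model output: a JSON array with one record per coded comment.
raw = '''[
  {"id":"ytc_Ugx7plp6hv8lHuQul8F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzoGC6V0mg2gloJkIV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

records = json.loads(raw)

# Index by comment ID so a single coding can be retrieved directly.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_Ugx7plp6hv8lHuQul8F4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # government fear
```

This is the coding shown in the table above: the `ytc_Ugx7plp6…` record carries `responsibility: government`, `reasoning: deontological`, `policy: regulate`, `emotion: fear`.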