Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- “AI is not good for humanity, it’s a bad creation,, I always know it, everything …” — ytc_Ugy7lWJlj…
- “They probably need to because they are often get poor [economic deal from EU or …” — rdc_et7uufp
- “I have a friend who took program code straight from ChatGPT for other people's p…” — ytc_UgzwpDXPM…
- “Totally agree with you! I think that AI's performance will be so advanced that t…” — ytr_UgxKHlDHr…
- “I knew it was AI in an instant and I'm a guy. Humanity is not doomed, you are.…” — ytc_Ugw3ykvVQ…
- “Oh thats great. Self driving trucks, so whos gonna navigate human to human conta…” — ytc_UgwSEUWzH…
- “This is pretty much what everyone was warning about over the last few years. An…” — rdc_luwsrri
- “Im calling bs. The bullet holes do not line up with the robots aim. Also the 1…” — ytc_UgzYdy5PP…
Comment
they do touch upon the subject if you listen to the full hearing. the senate asks "how long until AGI?" - the general response was "we don't know, improvements to AI happen quickly so it's important we get ahead of this." they also talked about the dangers of plugging current AI systems into something like military weapons, letting an AI freely perform tasks on the internet, as well as ensuring that you shouldn't be allowed to release AIs that can self replicate. the general response to all these concerns was that companies creating an AI could potentially require a license if you are serving a user base larger than X number (the example given was 10 million to 100 million users). another topic that was brought up was to create a government/global agency that enforces these laws as well as having a moratorium of scientists that work for the government to ensure agencies are keeping up with the advancements in AI so the correct things are enforced as time goes on
Platform: youtube · Topic: AI Governance · Posted: 2023-05-16T22:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
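Each coded record assigns one value per dimension. A minimal validation sketch follows, with the allowed vocabularies inferred only from the values visible on this page (the real codebook may define more categories; all names here are illustrative):

```python
# Allowed values per coding dimension, reconstructed from the records shown
# on this page. This is an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of validation errors for one coded record."""
    errors = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append(f"{dim}: unexpected value {value!r}")
    return errors

# The record coded above passes; an out-of-vocabulary value is flagged.
example = {"responsibility": "government", "reasoning": "consequentialist",
           "policy": "regulate", "emotion": "fear"}
print(validate(example))  # []
```

A check like this is useful because LLM output occasionally drifts outside the prompt's label set; flagging such rows keeps the coded dataset clean.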
Raw LLM Response
```json
[
  {"id":"ytr_UgxCcb86nnkWt4pW-kF4AaABAg.9oY-SbehkMZA5tZrjAjS_h","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz_4uL68XEflzgsmqp4AaABAg.AOOgVTlWzF_AVT1dcVMYFy","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzY3g868RP0sUhVc9t4AaABAg.ANXkw-RgebbAVT1HK-WmUU","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugyxlb6OCotXBHPw7rN4AaABAg.ANE2uD72vldAVT0a8H-SxY","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugyxlb6OCotXBHPw7rN4AaABAg.ANE2uD72vldAVT0nYC3Luw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugz4exGNsbg8Xwuu7hp4AaABAg.AN9GrtnEQIUANG1p_F6Xh1","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgwWkqbHIEzsp5dv6at4AaABAg.9poTbhxQYI_9prl_C5QS55","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyaW4tT9gJkU9kk7Y14AaABAg.9pnH7lNDcvg9pnMCaVvC9H","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pn4wZNU7r_","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pnhwrZkWP3","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
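The "Look up by comment ID" feature above presumably joins each coded row back to its source comment. A minimal sketch of that lookup, assuming the raw model output is a JSON array like the one shown (variable and function names are illustrative, and the array here is trimmed to one record from the response above):

```python
import json

# One record copied from the raw LLM response above, trimmed for brevity.
raw_response = """[
  {"id": "ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pn4wZNU7r_",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Parse the model output and index coded records by comment ID, so a
# lookup is a single dict access.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

coded = by_id["ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pn4wZNU7r_"]
print(coded["policy"])  # regulate
```

In practice the raw response would also be checked for malformed JSON before indexing, since the model output is not guaranteed to parse.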