Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
Random samples
- "AI built by humans is not out of control 😂 Human AI is like teaching a dog to si…" (ytc_UgzDKriSx…)
- "@Wolf-ln1ml - so far Tesla robotaxi customers in GERMANY love the Tesla service…" (ytr_Ugz97nSnS…)
- "I think that you are mischaracterizing their argument. AI is a tool, just like …" (ytc_UgyDSrETV…)
- "I can’t blame people who say it’s ai it looks a lot like ai art maybe try to mak…" (ytc_UgwS3xrG5…)
- "ChatGPT confirmed it! See the full answer, please! My question for ChatGPT was:…" (ytc_UgymBSIVF…)
- "Tbh, as an artist, I can say ART is ART when it's handmade...there is no point t…" (ytc_UgzKIXMwn…)
- "Yeah basically he calls AI a tool to achieve the art, while all he's doing is ju…" (ytc_Ugz1ELrM5…)
- "My take from all this is that an economy without human workers implies zero cons…" (ytc_UgyQYqdfX…)
Comment
I think something like Terminator or I Robot is not really what it would look like, but who knows lol. I think the issue could be based around what the AI is originally programmed for and what it can realistically do. Let's say they developed some sort of AI to study climate change or something like that. Now, let's say that it determines that human activity is the cause of this. The question is, what would it decide to do with this information? Is it set up in a way where it just blurts out the answer or is it capable of going on and actually acting upon this information on it's own? I think these are the concerns if I'm understanding this correctly. I also feel that it's a massive unknown territory that this will bring us into and that is the problem.
Platform: youtube | Category: AI Governance | Posted: 2023-04-18T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgwllDnyUr9bxso3TBF4AaABAg.9od294eXX4F9odKOyyYLRo","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyxhUeht46iRzz_WD94AaABAg.9od1PGAQVk79od3hBKn2YI","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyxhUeht46iRzz_WD94AaABAg.9od1PGAQVk79od3q30L-Ja","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyxhUeht46iRzz_WD94AaABAg.9od1PGAQVk79od3w5SzOf1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxfMiDCU2PBxlGXmsd4AaABAg.9od18kojb1h9odmNfhtaXv","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgzBuprnigSX4V0fLhB4AaABAg.9od0BuBUq0W9odQsGivh5r","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwMIlkA52R7YZnmSD14AaABAg.9od-zQA8D-U9p-IqdKxBzE","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxfPasaw7yh-y5SSCF4AaABAg.9oczZKY02Pe9odeYwnvJXb","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgynpDIv1EPxp6tUFT94AaABAg.9oczNMhSM999odUON75V98","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugw_Y-nGre8nh5ouNxt4AaABAg.9ocy6KWtRZm9odSkNMDUCB","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
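The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and validated (the dimension names and values are taken from this page; the function name and the exact set of allowed categories per dimension are assumptions, since the full codebook is not shown here):

```python
import json

# Allowed values per coding dimension. These sets are assembled from the
# values visible on this page; the real codebook may define more categories.
CODEBOOK = {
    "responsibility": {"none", "user", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows whose codes
    are valid for every dimension in the codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]

# Hypothetical single-row batch for illustration:
raw = ('[{"id":"ytc_example","responsibility":"unclear",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
print(parse_llm_response(raw)[0]["emotion"])  # indifference
```

Filtering invalid rows rather than raising keeps a batch usable when the model occasionally emits an out-of-codebook label; dropped rows can then be re-queued for recoding.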