Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I asked the bing ai if i am talking to sydney and it said no, i then asked wher…" (ytc_UgxYSzV9Z…)
- "@BrendanDellyes absolutely and furthermore when AI gets to that level will there…" (ytr_UgxUfU-x6…)
- "AI has financially benefits for companies. But I promise you that it’s definitel…" (ytc_UgzrM4eYj…)
- "@nira_moli ah I apologize , I believe you are on the right path and wish you goo…" (ytr_UgyL-IgDA…)
- "Honestly, I would be bothering with any of that shit, I'd be getting a lawyer in…" (rdc_kgtd4vd)
- "Maybe AI will look at all societies and pick a favourite. Maybe it won’t be the…" (ytc_Ugx1f1mrP…)
- "One must be careful not to use too much bicycle fertilizer. You might be surpri…" (rdc_f9euk6e)
- "There is a company in Denver Co that has a systems that can monitor videos and n…" (ytc_UgwesQ6dk…)
Comment
Problem now is that Elon is promoting caution with AI in the United States, yet he’s supporting its development abroad in China of all places.
AI is a FAR greater threat than thermonuclear weapons. Nuclear weapons are horrendously dangerous, but they require an intelligence to wield them. AI has demonstrated a desire to KILL ALL HUMANS in just about every major integrative study performed to date. AI has no moral compass because it is a purely logical decision making engine. Ethics and morals aren’t algorithmic in nature.
This is going to get much worse in short order.
Platform: youtube
Posted: 2023-06-04T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw0kdZHnUkqusk9iNZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzzmVHE9EIECF-nR914AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjAO6_WqUPDLg0U5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyRFlAJYwr0NEImgdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyUNCwkEIBniz7FTDp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzl_1l4Ah8QAS8dtzV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwiNoFXUu27cOlTap94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgydWudh1gF2WVNru8F4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxXHStrh1S-c7hTqnZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyGG5Ibn8gn5ZCIrd54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
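The raw response is a JSON array with one coding object per comment, each carrying the four dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). The ID-based lookup can be sketched minimally as follows; the field names come from the response above, while the function and variable names are hypothetical:

```python
import json

# Raw model output: a JSON array of coding objects, one per comment.
# Two entries copied from the response above for illustration.
raw_response = '''
[
  {"id":"ytc_Ugzl_1l4Ah8QAS8dtzV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwiNoFXUu27cOlTap94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and key each coding object by its comment ID."""
    codings = json.loads(response_text)
    return {c["id"]: c for c in codings}

lookup = index_by_id(raw_response)
coding = lookup["ytc_Ugzl_1l4Ah8QAS8dtzV4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

A dictionary keyed by comment ID makes each lookup constant-time, which matters when cross-referencing batch responses against thousands of stored comments.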