Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This is a lie, this is AI generated. The real fight was btwn 2 men....…" (ytc_UgwaX7tM1…)
- "Ah yes the Silicon Valley where there is about 2% Americans. The rest, foreigner…" (ytc_UgyTqOs4o…)
- "Crypto UBI is coming before the mass unemployment unrest. When you get a mandato…" (ytc_UgwUOZP7X…)
- "do people over 25 just have the inability to see AI? coz i always can but no one…" (ytc_UgzDf0FRW…)
- "Current lack of great AI system doesn't tell you a thing about how it will be in…" (ytr_UgzxkX9aa…)
- "By the time we figure out that there is a problem with AI it will probably be to…" (ytc_UgwiPJq3r…)
- "Truckers do a lot more than just drive. Who'll do pre-inspections? Who will chec…" (ytc_UgxeihMLP…)
- "plot twist: all of the art that he showed in this video was ai generated /silly…" (ytc_UgwLeqUZ-…)
Comment
I'm skeptical of whether or not this was actually ChatGPT. Not only does most of this case pre-date ChatGPT, but ChatGPT is also reinforced not to give legal advice, and when asked for information like this, will almost always refuse to cite things stating that it does not have internet connection. It takes a lot of coercion and roleplay to get it to act anything like this.
People get a lot of AI mixed up with ChatGPT, we're entering a new era of AI centered technology, and a lot of people are quickly and easily getting confused in it. I've seen countless people in my life recall chatting with ChatGPT about things, only for me to find out that they meant they were chatting with BARD(a significantly worse AI), or other easily accessibly AI online. Some people just assume that any AI they can find online is ChatGPT for some reason, and the lawyer may have made this same mistake, especially if they were capable of making all of the other mistakes they made in this case.
youtube · AI Responsibility · 2023-06-11T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgydZO1Uv0hFMDw0uml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyVtS53dktE5_VKhRh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwPbtJ1FhKCjH_3oY14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"skepticism"},
{"id":"ytc_UgyF_byyS3GF8Qd70tB4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzETEZBhAp_DYOYssl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzCvGqYMf6-g_h0tot4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7veMWdcbbLsA4IrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyVjrHHtzG71UGkvYp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOdc489M7IN1dk1wh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxypnP35NxwEBPte2B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
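The raw response above is a JSON array of per-comment codes, one object per comment with the fields `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of how such a response could be parsed and tallied is below; the field names come from the response itself, but the validation logic and the two-record sample payload are illustrative assumptions, not the tool's actual implementation.

```python
import json
from collections import Counter

# Sample payload in the same shape as the raw LLM response above
# (two records copied from it; a real batch would have more).
RAW = """[
{"id":"ytc_UgydZO1Uv0hFMDw0uml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwPbtJ1FhKCjH_3oY14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"skepticism"}
]"""

# Field names observed in the source; treating them as required is an assumption.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if isinstance(rec, dict) and REQUIRED_FIELDS <= rec.keys()
    ]

codes = parse_codes(RAW)
responsibility_counts = Counter(rec["responsibility"] for rec in codes)
print(responsibility_counts)
```

Keeping only records that carry every expected field lets a malformed or truncated model response degrade gracefully (bad records are dropped) rather than crash the coding pipeline.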