Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect:

- "Could it be that Yudkowsky is the most irritating, tedious, obnoxious, and hyste…" (ytc_UgzaIf0jF…)
- "Until it starts making stuff so good we will feel like an idiot for complaining,…" (ytr_UgzFfPDOG…)
- "38:00 why not make a super intelligence with its only goal being ai safety and t…" (ytc_Ugz79UJE8…)
- "Lmao. “Deepfakes is a global issue” So is Human trafficking and homelessness 🤣🤣🤣…" (ytc_Ugy3Fd7Z8…)
- "People need to WAKE UP FAST to all the lies that we are told and stop believing…" (ytc_Ugz76ZS76…)
- "Humans have been the smartest thing on the planet for a really long time. The id…" (rdc_mzy836p)
- "If you wouldn't use chatgpt for brain surgery than you shouldn't use it to keep …" (ytc_UgwgSp_er…)
- "@Landgraf43 *Looks at climate change* They may not want to die, by they are ok …" (ytr_Ugz5ug6u_…)
Comment

> There is an AI being developed that basically scans cases to help a lawyer find cases. This AI is trained to interpret what the lawyer is looking for, search through a database read all the documents and see if it thinks a case meets the criteria. Then output a listing of cases for the lawyer to specifically look at and see if they actually meet the needed criteria.
>
> But even that AI has a disclaimer that it is not perfect and that it’s only giving suggestions to aid the lawyer and or legal team in their research by pointing out cases that might help.

youtube · AI Responsibility · 2023-06-11T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
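The coded dimensions above are categorical. As a minimal sanity-check sketch, a parsed row can be validated against the known code values; the value sets below are only those observed on this page, not necessarily the full codebook:

```python
# Code values observed on this page (an assumption — the real codebook
# may define more categories per dimension).
OBSERVED_VALUES = {
    "responsibility": {"user", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "mixed", "fear", "unclear"},
}

def validate_row(row: dict) -> list:
    """Return (dimension, value) pairs that fall outside the observed codes."""
    return [
        (dim, row.get(dim))
        for dim, allowed in OBSERVED_VALUES.items()
        if row.get(dim) not in allowed
    ]

# The coding result shown above, as a dict:
row = {"id": "ytc_x", "responsibility": "unclear", "reasoning": "consequentialist",
       "policy": "unclear", "emotion": "unclear"}
print(validate_row(row))  # -> []
```

A row that uses an unlisted value (e.g. `"responsibility": "government"`) would be flagged rather than silently stored.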
Raw LLM Response
[
{"id":"ytc_UgzF5EcgYJ9F4oTAF7l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXqjdeunxkSq02cLZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3DAITwKJDcAk-6GV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugwt80nqRCC8O4IIYMx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx7int5xfY9YJyPtmp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzA3ubUQBC4fPmDMJ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxju3omli0ZJzfx2u94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZyx6HuAQU2ekNNM14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz80jrcP-H2uDzvDEZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXGWq9drkiPM0BRpx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
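The raw response above is a plain JSON array keyed by comment ID, so looking up any coded comment reduces to parsing and indexing it. A minimal sketch, assuming the response text has been saved to a string (only two of the ten rows are reproduced here for brevity; the field names come from the JSON above):

```python
import json

# Abbreviated copy of the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id":"ytc_UgzF5EcgYJ9F4oTAF7l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXGWq9drkiPM0BRpx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse one raw LLM response and index the coded rows by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgzXGWq9drkiPM0BRpx4AaABAg"]["emotion"])  # -> fear
```

In practice `json.loads` will raise `json.JSONDecodeError` if the model emits malformed JSON, which is worth catching when batch-processing many responses.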