Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "UH UH I did NOT JUST SE A ROBOT LOOKING AT YOU WHILE YOU WERE RECORDING THEM AT …" (ytc_UgwGBq1Y7…)
- "If there won't be another AI revolutionary breakthrough like actually going from…" (ytc_UgyqhWhMd…)
- "Ai can be helpful I won't deny that. But when it comes to art be it digital, mus…" (ytc_UgyyJ_SR1…)
- "What is not being talked about is how much more efficient AI will be at managing…" (rdc_kiu04dh)
- "When have governments / higher ups ever done anything to benefit the human race?…" (ytc_UgwQY1kJ0…)
- "Call me old fashioned, but when I learned computing, Computers controlled what t…" (ytc_UgyKrcUr-…)
- "All these arrogant clowns are working with AI and they don't know how quickly AI…" (ytc_Ugy2kv0oN…)
- "What happen to open source philosophy? When you don't know what actually running…" (ytc_UgyCe0LoR…)
Comment

> As awful as this situation is, I’m glad that it’s bringing public attention to what AI is not. Namely that it is NOT inelegance. It’s a language model designed to analyze a dataset for patterns and then predict what words might come next in a conversation. As an end user, you can’t know what that dataset includes, but you can know that this software doesn’t know what words mean, much less understand legal arguments. Companies want us all to believe that it’s magic, that’s ready to take the reigns in a number of industries, and it’s just not. It can’t replace human reasoning or deduction, no matter how large the dataset.

| Source | Topic | Posted |
|---|---|---|
| youtube | AI Responsibility | 2023-06-14T05:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy8fSXTAAcLUBhKJ5V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzrNE6_jSVcYpID1Ch4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwORgEYZGh0IQXW4jV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzlCp7EngYhDQ3T8rh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxc3Iy0wJE1z9EMygJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxB9U2aF8WNaeGYCwR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw49GMtyc371L4mpsZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzYWSwuYwr58A1KtXZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxWnsdQJIDZtRS5H-N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxyKoB_uyhtR97AeHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```