Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Does that mean AI is subject to copyright laws? It can't just use unlicensed sof…" (rdc_oh29wos)
- "@static7579 I can see what you mean, and I do think it’s important that people s…" (ytr_UgzSDQ4Yd…)
- "@SusCalvin I am working as intern at Fintech company here in india , Tools such…" (ytr_UgwVSDl4U…)
- "That was a Thompson sub machine gun which was produced in 1920. I appreciate the…" (ytc_UgzCUuVeC…)
- "Not gonna lie, ai art looks shallow as hell, and it lacks the character that hum…" (ytc_UgzKn_wPq…)
- "Uh, now that everyone sees what ChatGPT is capable of, I think we are doubting t…" (ytc_UgwVn3xtx…)
- "The only reason the robot said this is because they programmed it to say that fo…" (ytc_Ugwe9y6ZF…)
- "@witerunguard1737 Ai is not making humanity better / It’s just makes large greedy …" (ytr_UgxX5ZQHn…)
Comment
No they didn’t misunderstand that actually. They literally addressed the possibility of that exact scenario within the article.
>>”The report also raises the possibility that, ultimately, the physical bounds of the universe may not be on the side of those attempting to prevent proliferation of advanced AI through chips. “As AI algorithms continue to improve, more AI capabilities become available for less total compute. **Depending on how far this trend progresses**, it could ultimately become impractical to mitigate advanced AI proliferation through compute concentrations at all.” To account for this possibility, the report says a new federal AI agency could explore blocking the publication of research that improves algorithmic efficiency, though it concedes this may harm the U.S. AI industry and ultimately be unfeasible.
The bolded is interesting tho because it implies that there could be a hard-limit to how “efficient” an AI model can get in terms of usage. And if there is one, the government would only need to keep tweaking the limit on compute downward until you reach that hard limit. So it actually is possible that this type of regulation (of hard compute limits) could work in the long run.
| Source | Topic | Posted (Unix timestamp) | Upvotes |
|---|---|---|---|
| reddit | AI Responsibility | 1710737494 | ♥ 28 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_kvdxu7t","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kve18sa","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_kve4efh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_kve4fw3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kvdxhgv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
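The raw response is a JSON array with one object per coded comment, carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step is below; the field names come from the response itself, while the helper name `lookup_coding` is a hypothetical label of my own.

```python
import json

# Raw model output as shown above: one coding object per comment.
raw_response = """
[
  {"id":"rdc_kvdxu7t","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kve18sa","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_kve4efh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_kve4fw3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kvdxhgv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for `comment_id`, or None if it is absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

# Example: fetch the coding for the last comment in the batch.
coding = lookup_coding(raw_response, "rdc_kvdxhgv")
print(coding["emotion"])  # -> approval
```

A lookup like this is all the "Look up by comment ID" panel needs, provided each batch response parses as valid JSON; a production version would also validate that each object contains all four dimensions before display.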