During development I encountered a caveat: Opus 4.5 can't run the app or view its terminal output, especially for a UI with unusual functional requirements. But despite being blind, it knew enough about the ratatui terminal framework to implement whatever UI changes I asked for. There were still a large number of UI bugs, likely caused by Opus's inability to test its own work — chiefly failures to account for scroll offsets, resulting in incorrect click locations. As someone who spent five years as a black-box software QA engineer, unable to review the underlying code, this situation was my specialty. I put my QA skills to work by messing around with miditui and reporting any errors to Opus, occasionally with a screenshot, and it fixed them easily. I don't believe these bugs show that LLM agents are inherently better or worse than humans; humans are certainly capable of making the same mistakes. Even though I'm adept at finding bugs and offering solutions, I don't believe I would have avoided similar bugs had I coded such an interactive app without AI assistance: QA brain is different from software engineering brain.
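To illustrate the bug class, here is a minimal sketch of the kind of click-to-item mapping involved. This is not miditui's actual code; the function and parameter names (`click_to_index`, `area_top`, `scroll_offset`) are hypothetical, and the sketch uses plain Rust rather than ratatui types:

```rust
// Hypothetical helper: map a mouse click's terminal row back to an index
// into a scrollable list. The bug class described above is omitting
// `scroll_offset`, so once the list is scrolled the click lands on the
// wrong item.
fn click_to_index(
    click_row: u16,      // terminal row of the mouse click
    area_top: u16,       // terminal row where the list widget starts
    scroll_offset: usize, // how many items are scrolled out of view above
    item_count: usize,   // total items in the list
) -> Option<usize> {
    // Clicks above the list area don't map to any item.
    if click_row < area_top {
        return None;
    }
    // Visible row within the widget, shifted by the scroll offset.
    let index = (click_row - area_top) as usize + scroll_offset;
    if index < item_count { Some(index) } else { None }
}

fn main() {
    // List starts at row 2 and is scrolled down by 3: a click on the
    // first visible row should select item 3, not item 0.
    assert_eq!(click_to_index(2, 2, 3, 10), Some(3));
    // Unscrolled list: row 5 maps to item 3.
    assert_eq!(click_to_index(5, 2, 0, 10), Some(3));
    // Click above the widget maps to nothing.
    assert_eq!(click_to_index(1, 2, 0, 10), None);
    println!("ok");
}
```

Dropping the `+ scroll_offset` term compiles and works fine until the user scrolls — exactly the kind of defect that black-box testing catches and a code review can miss.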