Artificial intelligence vs search engines: the start of a new era?
From the arrival of ChatGPT to the decline in Google searches: how generative AI is changing our relationship with online information and putting …
Google's AI Feature: Hilarious Inaccuracies and Serious Concerns

Google recently launched a new AI feature integrated into its search engine. While intended to provide quick answers, the feature has generated both amusement and alarm due to its propensity for inaccuracies. Its responses, while sometimes humorous, raise concerns about the spread of misinformation.

One example, highlighted in a recent viral video by content creator The Chainsaw, involves a search for why cheese doesn't stick to pizza: the AI suggested adding non-toxic glue to the sauce. Another shows the AI giving dangerous first-aid advice for snakebites. As The Chainsaw puts it, "Google's AI apparently recommends cutting the wound or attempting to suck out venom," advice that directly contradicts established medical guidance.

While the AI's responses are often amusing, the potential spread of inaccurate and harmful information is a serious concern. The video prompts reflection on Google's past commitment to "Don't be evil," questioning whether the AI's current performance aligns with that principle, and serves as a cautionary tale about the limitations and risks of rapidly developing AI technologies.

Google has acknowledged the issues and said it is working to improve the AI's accuracy. The episode underscores the importance of critically evaluating information found online, particularly where health and safety are concerned.