{"id":45248,"date":"2024-05-27T18:09:52","date_gmt":"2024-05-27T22:09:52","guid":{"rendered":"https:\/\/www.technewsday.com\/?p=45248"},"modified":"2024-05-27T18:11:28","modified_gmt":"2024-05-27T22:11:28","slug":"45248","status":"publish","type":"post","link":"https:\/\/technewsday.com\/staging\/45248\/","title":{"rendered":"Google suffers another embarrassing AI launch.  Hashtag Trending for Tuesday, May 28th, 2024"},"content":{"rendered":"<p>London Drugs refused to pay a ransom and its data is leaked. Another epic AI fail from Google, Meta\u2019s chief scientist tells students that large language models aren\u2019t worth studying, and more bad news on the risks of kids and online activity.<\/p>\n<p>All this and more on this \u201cshocking truth\u201d edition of Hashtag Trending. I\u2019m your host, Jim Love, let\u2019s get into it.<\/p>\n<p>London Drugs has confirmed that some of its corporate data has been leaked online by the LockBit ransomware group. This follows the Canadian pharmacy chain&#8217;s April 28th cyberattack that forced temporary store closures.<\/p>\n<p>LockBit demanded a 25 million dollar ransom payment. Although there were rumours that the company had agreed to pay or was at least negotiating, in the end they did not pay the ransom.<\/p>\n<p>In a statement, London Drugs said quote &#8211; &#8220;We are aware that some of these exfiltrated files have now been released&#8230;London Drugs is unwilling and unable to pay ransom to these cybercriminals.&#8221;<\/p>\n<p>The leaked files, around 300 gigabytes in size, include human resources records, medical notes detailing issues like sexual assault, financial data, legal documents and more.<\/p>\n<p>One cybersecurity analyst, Brett Callow, likened the data dump to kidnappers killing a hostage after ransom demands went unmet.<\/p>\n<p>&#8220;This is like kidnappers killing their hostage. 
They&#8217;re giving up on being able to monetize the attack and are releasing the info as a warning to future victims,&#8221; Callow stated.<\/p>\n<p>London Drugs maintains there is still no evidence that customer data or primary employee databases were compromised. However, impacted corporate staff will be notified and offered assistance such as credit monitoring services.<\/p>\n<p>The company has also committed to a full investigation and to disclosing to any affected people precisely what data has been leaked.<\/p>\n<p>The company continues working with law enforcement on the investigation.<\/p>\n<p>Sources include: <a href=\"https:\/\/www.cheknews.ca\/deeply-distressing-hackers-leak-corporate-london-drugs-data-1205880\/\">CHEK News<\/a> and the Times Colonist<\/p>\n<p>Yann LeCun, Chief AI Scientist at Meta, had some contrarian advice at the VivaTech conference in Paris for students looking to get into the AI space. LeCun stated:<\/p>\n<p>&#8220;If you are a student interested in building the next generation of AI systems, don&#8217;t work on LLMs (large language models). This is in the hands of large companies, there&#8217;s nothing you can bring to the table.&#8221;<\/p>\n<p>LeCun, a pioneer in the development of convolutional neural networks, instead urged students to focus on developing next-generation AI that can overcome the limitations of large language models like GPT-3.<\/p>\n<p>&#8220;Eventually all our interactions with the digital world will be mediated by AI assistants. This will be extremely dangerous for diversity of thought, for democracy, for just about everything if a small number control it all,&#8221; LeCun warned.<\/p>\n<p>His comments come as the AI community is divided on the future trajectory &#8211; whether to continue scaling up transformer-based language models or explore new architectures entirely. 
Some experts believe moving away from transformer models could lead to breakthroughs on the scale of GPT-4.<\/p>\n<p>However, large language models continue advancing rapidly, with models like GPT-4o demonstrating multimodal capabilities to understand video and audio natively. As Sam Altman stated, training data may no longer be a bottleneck for further scaling up these models.<\/p>\n<p>Despite that, LeCun\u2019s advice makes a lot of sense \u2013 but it also exhibits a rare candor \u2013 the large companies may very well have a virtual monopoly on these models, and that should not only serve as a warning to students looking for future avenues \u2013 it should be a warning to us all.<\/p>\n<p>Sources include: <a href=\"https:\/\/analyticsindiamag.com\/yann-lecun-advices-students-getting-into-ai-space-to-not-work-on-llms\/\">Analytics India<\/a><\/p>\n<p>Google&#8217;s &#8220;AI Overviews&#8221; \u2013 a new experimental AI search feature \u2013 has led to yet another publicity nightmare for the company when it provided inaccurate and nonsensical responses to some queries.<\/p>\n<p>The AI-powered tool, which was supposed to summarize and provide insights from search results, has told users things like using &#8220;non-toxic glue&#8221; to help cheese stick to pizza and that geologists recommend eating a rock per day to ensure people get their supply of necessary minerals.<\/p>\n<p>The glitches have been widely mocked on social media. 
In one example, a reporter searching whether gasoline could cook spaghetti faster was told it could be used &#8220;to make a spicy spaghetti dish&#8221; and given a recipe.<\/p>\n<p>These answers, by the way, come from satirical sites like the Onion or old posts on sites like Reddit.<\/p>\n<p>A Google spokesperson acknowledged these were &#8220;isolated examples&#8221; and &#8220;aren&#8217;t representative of most people&#8217;s experiences&#8221;, stating &#8220;The vast majority of AI overviews provide high quality information.&#8221; However, the company said it has taken action where violations occurred to refine its systems.<\/p>\n<p>The struggles highlight the challenges of deploying AI search capabilities that need to handle any query accurately. As Pedro Domingos, a professor of computer science, stated: &#8220;We don&#8217;t know how many searches it got right, because they&#8217;re less funny to share on social media, but AI search clearly needs to be able to handle anything thrown at it.&#8221;<\/p>\n<p>It&#8217;s easy to collect some errors and amplify them, and no doubt, like earlier AI failures, there is a small army of people trying to get a nonsensical or humorous answer to a question. But in fairness, if you asked these same questions of Perplexity.ai, a Google rival, you would indeed get accurate answers. Humans shouldn\u2019t eat rocks. You leave your cheese out at room temperature if you have problems with it not sticking.<\/p>\n<p>The ability to distinguish between satire or humour and factual posts is something that Google\u2019s design team should at least have considered.<\/p>\n<p>The problem is not that Google has failures; it just seems to always fail on simple things that should\u2019ve been easily caught. And I\u2019m not sure they\u2019ve had a major launch without one of these epic backlashes.<\/p>\n<p>Part of it is how they set themselves up. We\u2019re Google. 
Here\u2019s our big deal.<\/p>\n<p>Maybe, just maybe, they could have released this and said it\u2019s new and it\u2019s going to make stupid mistakes \u2013 try and break it \u2013 and then thanked the people who found the problems.<\/p>\n<p>Just sayin\u2019<\/p>\n<p>Sources include: <a href=\"https:\/\/www.bbc.com\/news\/articles\/cd11gzejgz4o\">The BBC<\/a><\/p>\n<p>Alarming new research from the University of Edinburgh&#8217;s Childlight initiative estimates that over 300 million children globally faced online sexual exploitation and abuse in the past year alone.<\/p>\n<p>The study provides the first global estimate of the crisis&#8217;s scale. Researchers found one in eight children, or 12.6%, were victims of non-consensual sexual images\/videos and abusive online interactions like grooming.<\/p>\n<p>Paul Stanfield, Childlight&#8217;s CEO, painted a grim picture, stating:<\/p>\n<p>&#8220;This is on a staggering scale that in the UK alone equates to forming a line of male offenders that could stretch all the way from Glasgow to London \u2013 or filling Wembley Stadium 20 times over.&#8221;<\/p>\n<p>The research highlighted the prevalence of offenders admitting they would abuse children if assured secrecy &#8211; 14 million men in the U.S., 1.8 million in the UK, and 7.5% in Australia. And yes, this exists in Canada.<\/p>\n<p>Grace Tame, a survivor who now runs a foundation on the issue, warned the crisis is &#8220;steadily worsening thanks to advancing technologies&#8221; enabling instantaneous creation and distribution of abuse material.<\/p>\n<p>Stephen Kavanagh of Interpol called it a &#8220;clear and present danger&#8221; requiring a unified global response, including better investigator training and data sharing.<\/p>\n<p>Scottish Minister Natalie Don stated: &#8220;Keeping children and young people safe from sexual abuse and exploitation is of the utmost importance&#8230;these are global problems which require global solutions.&#8221;<\/p>\n<p>There\u2019s 
no perfect solution to this, but if you haven\u2019t had that talk with your kids that tells them they can tell you anything and all you\u2019ll do is understand and help them \u2013 maybe it\u2019s time.<\/p>\n<p>Sources include:\u00a0 <a href=\"Staggering%20300%20Million%20Children%20Face%20Online%20Sexual%20Abuse%20Yearly\">The Independent<\/a><\/p>\n<p>And that\u2019s our show for today\u2026<\/p>\n<p>We love your comments. Reach me at <a href=\"mailto:editorial@technewsday.ca\">editorial@technewsday.ca<\/a>. Show notes are at technewsday.ca or .com \u2013 take your pick.<\/p>\n<p>I\u2019m your host, Jim Love, have a terrific Tuesday<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>London Drugs refused to pay ransom and data is leaked. Another epic AI fail from Google, Meta\u2019s chief scientist tells students that large language models aren\u2019t worth studying and more bad news on the risks of kids and online activity. All this and more on this \u201cshocking truth\u201d edition of Hashtag Trending. 
I\u2019m your host, [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":45249,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[360,9],"tags":[1346,198,1365],"class_list":["post-45248","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-podcasts","category-todays-news","tag-hashtag-trending","tag-podcast","tag-todays-news"],"acf":[],"_links":{"self":[{"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/posts\/45248","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/comments?post=45248"}],"version-history":[{"count":3,"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/posts\/45248\/revisions"}],"predecessor-version":[{"id":45252,"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/posts\/45248\/revisions\/45252"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/media\/45249"}],"wp:attachment":[{"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/media?parent=45248"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/categories?post=45248"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/technewsday.com\/staging\/wp-json\/wp\/v2\/tags?post=45248"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}