[l] at 6/24/24 3:00pm
WordPress plugins running on as many as 36,000 websites have been backdoored in a supply-chain attack with unknown origins, security researchers said on Monday. So far, five plugins are known to be affected in the campaign, which was active as recently as Monday morning, researchers from security firm Wordfence reported. Over the past week, unknown threat actors have added malicious functions to updates available for the plugins on WordPress.org, the official site for the open source WordPress CMS software. When installed, the updates automatically create an attacker-controlled administrative account that provides full control over the compromised site. The updates also add content designed to goose search results.

Poisoning the well

“The injected malicious code is not very sophisticated or heavily obfuscated and contains comments throughout making it easy to follow,” the researchers wrote. “The earliest injection appears to date back to June 21st, 2024, and the threat actor was still actively making updates to plugins as recently as 5 hours ago.”

[Category: Biz & IT, Security, backdoors, plugins, supply chain attacks, wordpress]

[l] at 6/24/24 12:44pm
[Image: Michael Jackson in concert, 1986. Sony Music owns a large portion of publishing rights to Jackson's music. (credit: Getty Images)]

Universal Music Group, Sony Music, and Warner Records have sued AI music-synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models, reports Reuters. Udio and Suno can generate novel song recordings based on text-based descriptions of music (e.g., "a dubstep song about Linus Torvalds"). The lawsuits, filed in federal courts in New York and Massachusetts, claim that the AI companies' use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists. Like other generative AI models, both Udio and Suno (which we covered separately in April) rely on a broad selection of existing human-created artworks that teach a neural network the relationship between words in a written prompt and styles of music. The record labels correctly note that these companies have been deliberately vague about the sources of their training data.

[Category: AI, Biz & IT, Policy, AI lawsuit, audio synthesis, generative ai, google, machine learning, Michael Jackson, microsoft, music synthesis, openai, Reuters, sony music, Suno, Udio, Universal Music Group, voice synthesis, Warner Records]

[l] at 6/20/24 3:04pm
On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000 token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window. So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)." As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).

[Category: AI, Biz & IT, Anthropic, Claude, Claude 3, Claude 3.5, Claude 3.5 Sonnet, Claude Sonnet, Google Gemini, GPT-4, GPT-4o, large language models, LLMs, machine learning, meta, Mistral, openai, text synthesis]

[l] at 6/20/24 10:03am
[Image: Ford Mustang Mach E electric vehicles are offered for sale at a dealership on June 5, 2024, in Chicago, Illinois. (credit: Scott Olson / Getty Images)]

CDK Global touts itself as an all-in-one software-as-a-service solution that is "trusted by nearly 15,000 dealer locations." One connection, over an always-on VPN to CDK's data centers, gives a dealership customer relationship management (CRM) software, financing, inventory, and more back-office tools. That all-in-one nature explains why people trying to buy cars, and especially those trying to sell them, have had a rough couple of days. CDK's services have been down, due to what the firm describes as a "cyber incident." CDK shut down most of its systems Wednesday, June 19, then told dealerships that evening that it restored some services. CDK told dealers today, June 20, that it had "experienced an additional cyber incident late in the evening on June 19," and shut down systems again. "At this time, we do not have an estimated time frame for resolution and therefore our dealers' systems will not be available at a minimum on Thursday, June 20th," CDK told customers.

[Category: Biz & IT, Cars, Security, car dealers, car dealerships, cars, cdk global, crm, ransomware, SaaS, venture capital, VPN]

[l] at 6/20/24 8:06am
[Image: Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023. (credit: Getty Images)]

On Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the goal of safely building "superintelligence," which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly in the extreme. "We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product," wrote Sutskever on X. "We will do it through revolutionary breakthroughs produced by a small cracked team." Sutskever was a founding member of OpenAI and formerly served as the company's chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017. The trio posted a statement on the company's new website.

[Category: AI, Biz & IT, Ilya Sutskever, machine learning, sam altman, SSI]

[l] at 6/18/24 3:41pm
[Image: Screen capture of a Runway Gen-3 Alpha video generated with the prompt "A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them." (credit: Runway)]

On Sunday, Runway announced a new AI video synthesis model called Gen-3 Alpha that's still under development, but it appears to create video of similar quality to OpenAI's Sora, which debuted earlier this year (and has also not yet been released). It can generate novel, high-definition video from text prompts that range from realistic humans to surrealistic monsters stomping the countryside. Unlike Runway's previous best model from June 2023, which could only create two-second-long clips, Gen-3 Alpha can reportedly create 10-second-long video segments of people, places, and things that have a consistency and coherency that easily surpasses Gen-2. If 10 seconds sounds short compared to Sora's full minute of video, consider that the company is working with a shoestring budget of compute compared to more lavishly funded OpenAI—and actually has a history of shipping video generation capability to commercial users. Gen-3 Alpha does not generate audio to accompany the video clips, and it's highly likely that temporally coherent generations (those that keep a character consistent over time) are dependent on similar high-quality training material. But Runway's improvement in visual fidelity over the past year is difficult to ignore.

[Category: AI, Biz & IT, AI-generated video, deepfakes, filmmaking, films, Gen-2, Gen-3, Gen-3 Alpha, giant cotton-candy monster, google, machine learning, meta, openai, Runway, Sora, Special Effects, video synthesis]

[l] at 6/18/24 2:30pm
Two men have pleaded guilty to charges of computer intrusion and aggravated identity theft tied to their theft of records from a law enforcement database for use in doxxing and extorting multiple individuals. Sagar Steven Singh, 20, and Nicholas Ceraolo, 26, admitted to being members of ViLE, a group that specializes in obtaining personal information of individuals and using it to extort or harass them. Members use various methods to collect Social Security numbers, cell phone numbers, and other personal data and post it, or threaten to post it, to a website administered by the group. Victims had to pay to have their information removed or kept off the website. Singh pleaded guilty on Monday, June 17, and Ceraolo pleaded guilty on May 30.

Impersonating a police officer

The men gained access to the law enforcement portal by stealing the password of an officer’s account and using it to log in. The portal, maintained by an unnamed US federal law enforcement agency, was restricted to members of various law enforcement agencies to share intelligence from government databases with state and local officials. The site provided access to detailed nonpublic records involving narcotics and currency seizures and to law enforcement intelligence reports.

[Category: Biz & IT, Security, doxxing, extortion, Identity theft]

[l] at 6/18/24 11:09am
Japanese telecommunications giant SoftBank recently announced that it has been developing "emotion-canceling" technology powered by AI that will alter the voices of angry customers to sound calmer during phone calls with customer service representatives. The project aims to reduce the psychological burden on operators suffering from harassment and has been in development for three years. SoftBank plans to launch it by March 2026, but the idea is receiving mixed reactions online. According to a report from the Japanese news site The Asahi Shimbun, SoftBank's project relies on an AI model to alter the tone and pitch of a customer's voice in real time during a phone call. SoftBank's developers, led by employee Toshiyuki Nakatani, trained the system using a dataset of over 10,000 voice samples, which were performed by 10 Japanese actors expressing more than 100 phrases with various emotions, including yelling and accusatory tones. Voice cloning and synthesis technology has made massive strides in the past three years. We've previously covered technology from Microsoft that can clone a voice with a three-second audio sample and audio-processing technology from Adobe that cleans up audio by re-synthesizing a person's voice, so SoftBank's technology is well within the realm of plausibility.

[Category: AI, Biz & IT, call center, customer service, emotion cancelling, machine learning, SoftBank, The Asahi Shimbun, Toshiyuki Nakatani]

[l] at 6/17/24 12:39pm
Hardware manufacturer Asus has released updates patching multiple critical vulnerabilities that allow hackers to remotely take control of a range of router models with no authentication or interaction required of end users. The most critical vulnerability, tracked as CVE-2024-3080, is an authentication bypass flaw that allows remote attackers to log into a device without any credentials. The vulnerability, according to the Taiwan Computer Emergency Response Team / Coordination Center (TWCERT/CC), carries a severity rating of 9.8 out of 10. Asus said the vulnerability affects the following routers:

- XT8 and XT8_V2: https://www.asus.com/uk/supportonly/asus%20zenwifi%20ax%20(xt8)/helpdesk_bios/
- RT-AX88U: https://www.asus.com/supportonly/RT-AX88U/helpdesk_bios/
- RT-AX58U: https://www.asus.com/supportonly/RT-AX58U/helpdesk_bios/
- RT-AX57: https://www.asus.com/networking-iot-servers/wifi-routers/asus-wifi-routers/rt-ax57/helpdesk_bios
- RT-AC86U: https://www.asus.com/supportonly/RT-AC86U/helpdesk_bios/
- RT-AC68U: https://www.asus.com/supportonly/RT-AC68U/helpdesk_bios/

A favorite haven for hackers

A second vulnerability, tracked as CVE-2024-3079, affects the same router models. It stems from a buffer overflow flaw and allows remote hackers who have already obtained administrative access to an affected router to execute commands.

[Category: Biz & IT, Security, ASUS, routers, vulnerabilities]

[l] at 6/17/24 10:40am
Proton, the secure-minded email and productivity suite, is becoming a nonprofit foundation, but it doesn't want you to think about it in the way you think about other notable privacy and web foundations. "We believe that if we want to bring about large-scale change, Proton can’t be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto “foundations”)," Proton CEO Andy Yen wrote in a blog post announcing the transition. "Instead, Proton must have a profitable and healthy business at its core." The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will "make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits." Among other members of the Foundation's board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

[Category: Biz & IT, Tech, andy yen, encryption, privacy, proton, proton foundation, proton mail, Tim Berners-Lee]

[l] at 6/14/24 1:40pm
Ransomware criminals have quickly weaponized an easy-to-exploit vulnerability in the PHP programming language that executes malicious code on web servers, security researchers said. As of Thursday, Internet scans performed by security firm Censys had detected 1,000 servers infected by a ransomware strain known as TellYouThePass, down from 1,800 detected on Monday. The servers, primarily located in China, no longer display their usual content; instead, many list the site’s file directory, which shows all files have been given a .locked extension, indicating they have been encrypted. An accompanying ransom note demands roughly $6,500 in exchange for the decryption key.

[Image: The output of PHP servers infected by TellYouThePass ransomware. (credit: Censys)]
[Image: The accompanying ransom note. (credit: Censys)]

When opportunity knocks

The vulnerability, tracked as CVE-2024-4577 and carrying a severity rating of 9.8 out of 10, stems from errors in the way PHP converts Unicode characters into ASCII. A feature built into Windows known as Best Fit allows attackers to use a technique known as argument injection to convert user-supplied input into characters that pass malicious commands to the main PHP application. Exploits allow attackers to bypass CVE-2012-1823, a critical code execution vulnerability patched in PHP in 2012.

[Category: Biz & IT, Security, exploits, php, ransomware, vulnerability]

[l] at 6/14/24 12:04pm
[Image: Illustration of the Apollo lunar lander Eagle over the Moon. (credit: Getty Images)]

On Friday, a retired software engineer named Martin C. Martin announced that he recently discovered a bug in the original Lunar Lander computer game's physics code while tinkering with the software. Created by a 17-year-old high school student named Jim Storer in 1969, this primordial game rendered the action only as text status updates on a teletype, but it set the stage for future versions to come. The legendary game—which Storer developed on a PDP-8 minicomputer in a programming language called FOCAL just months after Neil Armstrong and Buzz Aldrin made their historic moonwalks—allows players to control a lunar module's descent onto the Moon's surface. Players must carefully manage their fuel usage to achieve a gentle landing, making critical decisions every ten seconds to burn the right amount of fuel. In 2009, just short of the 40th anniversary of the first Moon landing, I set out to find the author of the original Lunar Lander game, which was then primarily known as a graphical game, thanks to the graphical version from 1974 and a 1979 Atari arcade title. When I discovered that Storer created the oldest known version as a teletype game, I interviewed him and wrote up a history of the game. Storer later released the source code to the original game, written in FOCAL, on his website.
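The turn-based mechanic described above can be sketched in a few lines of Python. This is a toy model, not Storer's original FOCAL code: the gravity value is real lunar surface gravity, but the thrust constant and function names are illustrative assumptions.

```python
# Toy sketch of the Lunar Lander mechanic: every 10-second turn the
# player picks a fuel burn rate, which fights gravity's pull on the
# craft's downward speed. NOT Jim Storer's FOCAL code, just a model.

GRAVITY = 1.62        # lunar surface gravity, m/s^2
EXHAUST_ACCEL = 3.0   # assumed: deceleration per unit of burn rate

def step(altitude, velocity, fuel, burn, dt=10.0):
    """Advance one 10-second turn. `velocity` is downward-positive;
    `burn` is fuel units consumed per second."""
    burn = min(burn, fuel / dt)             # can't burn fuel we don't have
    accel = GRAVITY - EXHAUST_ACCEL * burn  # net downward acceleration
    new_velocity = velocity + accel * dt
    # average-velocity integration over the turn
    new_altitude = altitude - (velocity + new_velocity) / 2 * dt
    return max(new_altitude, 0.0), new_velocity, fuel - burn * dt

# One turn of free fall from 1,000 m, descending at 10 m/s:
alt, vel, fuel = step(1000.0, 10.0, 200.0, burn=0.0)
```

A full game would loop this step, read the player's burn each turn, and judge the landing by the velocity at touchdown; Martin's bug report concerned subtler details of how the original integrated the physics within a turn.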

[Category: Biz & IT, Gaming, Space, Tech, apollo, FOCAL, gaming, Jim Storer, lunar lander, Martin C. Martin, NASA, PDP-8, space]

[l] at 6/13/24 2:51pm
Last month, Wells Fargo terminated over a dozen bank employees following an investigation into claims of faking work activity on their computers, according to a Bloomberg report. A Financial Industry Regulatory Authority (FINRA) search conducted by Ars confirmed that the fired members of the firm's wealth and investment management division were "discharged after review of allegations involving simulation of keyboard activity creating impression of active work."

[Image: A screenshot of a FINRA report showing that an employee was "Discharged after review of allegations involving simulation of keyboard activity creating impression of active work." (credit: Jon Brodkin / Ars Technica)]

A rise in remote work during the COVID-19 pandemic accelerated the adoption of remote worker surveillance techniques, especially those using software installed on machines that keeps track of activity and reports back to corporate management. It's worth noting that the Bloomberg report says the FINRA filing does not specify whether the fired Wells Fargo employees were simulating activity at home or in an office.

[Category: Biz & IT, Tech, corporate surveillance, faking work, keyboard, Mouse, mouse jigglers, surveillance, tech, Wells Fargo]

[l] at 6/13/24 11:20am
On Monday, Apple announced it would be integrating OpenAI's ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. It paves the way for future third-party AI model integrations, but given Google's multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT's placement on its devices as compensation enough. "Apple isn’t paying OpenAI as part of the partnership," writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. "Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments." The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT's capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

[Category: AI, Apple, Biz & IT, AI assistants, apple, apple intelligence, ChatGPT, GPT-4, iOS, ios 18, iPadOS, iPadOS 18, iphone, large language models, machine learning, macOS, macos sequoia, openai, sam altman]

[l] at 6/12/24 2:52pm
[Image: A photo illustration of what a shirt-button camera could look like. (credit: Aurich Lawson | Getty Images)]

On Saturday, Turkish police arrested and detained a prospective university student who is accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, report Reuters and The Daily Mail. The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person's eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense. According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a "router" (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

[Category: AI, Biz & IT, machine learning, Reuters, The Daily Mail, Turkey, TYT]

[l] at 6/12/24 1:26pm
[Image: An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. (credit: HorneyMetalBeing)]

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease. A thread on Reddit, titled, "Is this release supposed to be a joke? [SD3-2B]," details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, "Why is SD3 so bad at generating girls lying on the grass?" shows similar issues, but for entire human bodies. Hands have traditionally been a challenge for AI image generators due to lack of good examples in early training data sets, but more recently, several image-synthesis models seemed to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

[Category: AI, Biz & IT, AI image generator, body horror, image synthesis, machine learning, Stability AI, Stable Diffusion, Stable Diffusion 3]

[l] at 6/12/24 11:57am
One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business. Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a "Driving Behavior Data History Report" to insurers. Skeptics have long assumed that car companies had at least some plan to monetize telematics, the rich data regularly sent from cars back to their manufacturers. But a concrete example of this was reported by The New York Times' Kashmir Hill, in which drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every "hard acceleration" or "hard braking event," among other data points.

[Category: Biz & IT, Cars, General Motors, GM, honda, hyundai, lexisnexis, lexisnexis risk solutions, telematics, verisk]

[l] at 6/11/24 4:56pm
Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said. The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.

Enter CoatHanger

The Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on FortiGate appliances inside the Dutch Ministry of Defence. Once installed, the never-before-seen malware, specifically designed for the underlying FortiOS operating system, was able to permanently reside on devices even when rebooted or receiving a firmware update. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

[Category: Biz & IT, Security, exploits, fortigate, Fortinet, vpns, vulnerabilities]

[l] at 6/11/24 11:29am
[Image: He isn't using an iPhone, but some people talk to Siri like this.]

On Monday, Apple premiered "Apple Intelligence" during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems. Since rumors of the partnership first emerged, we've seen confusion on social media about why Apple didn't develop a cutting-edge GPT-4-like chatbot internally. Despite Apple's year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple's lack of innovation. "This is really strange. Surely Apple could train a very good competing LLM if they wanted? They've had a year," wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misconceptions about it—saying things like, "It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!"

[Category: AI, Apple, Biz & IT, apple, apple intelligence, chatgtp, Elon Musk, google, GPT-4, iphone, large language models, machine learning, openai, Siri, Tim Cook, WWDC, WWDC 2024]

[l] at 6/10/24 4:08pm
(credit: Getty Images)

As many as 165 customers of cloud storage provider Snowflake have been compromised by a group that obtained login credentials through information-stealing malware, researchers said Monday.

On Friday, Lending Tree subsidiary QuoteWizard confirmed it was among the customers notified by Snowflake that it was affected in the incident. Lending Tree spokesperson Megan Greuling said the company is in the process of determining whether data stored on Snowflake has been stolen.

“That investigation is ongoing,” she wrote in an email. “As of this time, it does not appear that consumer financial account information was impacted, nor information of the parent entity, Lending Tree.”

[Category: Biz & IT, Security, Data breaches, multi factor authentication, snowflake]

[l] at 6/10/24 1:15pm
(credit: Apple)

On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says that Apple Intelligence features will be widely available later this year and will be available as a beta test for developers this summer.

The announcements came during a livestreamed WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.

At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as a way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple found a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation, when Apple executive Craig Federighi said, "It's AI for the rest of us."

[Category: AI, Apple, Biz & IT, apple, apple intelligence, apple watch, ChatGPT, chatgtp, iOS, ipad, iPadOS, iphone, large language models, machine learning, openai, sam altman]

[l] at 6/7/24 3:57pm
A critical vulnerability in the PHP programming language can be trivially exploited to execute malicious code on Windows devices, security researchers warned as they urged those affected to take action before the weekend started.

Within 24 hours of the vulnerability and accompanying patch being published, researchers from the nonprofit security organization Shadowserver reported Internet scans designed to identify servers that are susceptible to attacks. That—combined with (1) the ease of exploitation, (2) the availability of proof-of-concept attack code, (3) the severity of remotely executing code on vulnerable machines, and (4) the widely used XAMPP platform being vulnerable by default—has prompted security practitioners to urge admins to check whether their PHP servers are affected before starting the weekend.

When “Best Fit” isn't

“A nasty bug with a very simple exploit—perfect for a Friday afternoon,” researchers with security firm WatchTowr wrote.

[Category: Biz & IT, Security, php, remote code execution, vulnerabilities]

[l] at 6/7/24 8:24am
(credit: Getty)

After acquiring VMware, Broadcom swiftly enacted widespread changes that drew strong public backlash. A new survey of 300 director-level IT workers at North American companies that are VMware customers provides insight into how customers have reacted to Broadcom's overhaul.

The survey, released Thursday, doesn't provide feedback from every VMware customer, but it's the first time we've seen responses from IT decision-makers working for companies paying for VMware products. It echoes concerns expressed when some of Broadcom's more controversial changes to VMware were announced, like the end of perpetual licenses and growing costs.

CloudBolt Software commissioned Wakefield Research, a market research agency, to run the study from May 9 through May 23. The "CloudBolt Industry Insights Reality Report: VMware Acquisition Aftermath" includes responses from workers at 150 companies with fewer than 1,000 workers and 150 companies with more than 1,000 workers. Survey respondents were invited via email and took the survey online, with the report authors writing that results are subject to sampling variation of ±5.7 percentage points at a 95 percent confidence level.
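The ±5.7-point figure is what the standard margin-of-error formula yields for a sample of 300 at 95 percent confidence, using the worst-case proportion p = 0.5. A quick check (the formula and z-value are standard survey statistics, not taken from the report itself):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a simple random sample of size n.

    p = 0.5 is the worst case (widest interval); z = 1.96 is the
    critical value for a 95 percent confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For the report's n = 300: about 0.0566, i.e. ±5.7 percentage points
print(round(margin_of_error(300) * 100, 1))
```

This assumes simple random sampling; an opt-in online panel like this one can carry additional, unquantified bias, which is why such reports phrase the figure as "subject to sampling variation."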

[Category: Biz & IT, Amazon, Amazon Web Services, Broadcom, vmware]

[l] at 6/6/24 1:13pm
(credit: Getty Images)

The FBI is urging victims of one of the most prolific ransomware groups to come forward after agents recovered thousands of decryption keys that may allow the recovery of data that has remained inaccessible for months or years.

The revelation, made Wednesday by a top FBI official, comes three months after an international roster of law enforcement agencies seized servers and other infrastructure used by LockBit, a ransomware syndicate that authorities say has extorted more than $1 billion from 7,000 victims around the world. Authorities said at the time that they took control of 1,000 decryption keys, 4,000 accounts, and 34 servers and froze 200 cryptocurrency accounts associated with the operation.

In a speech at a cybersecurity conference in Boston, FBI Cyber Assistant Director Bryan Vorndran said Wednesday that agents have also recovered an asset that will be of intense interest to thousands of LockBit victims—the decryption keys that could allow them to unlock data that’s been held for ransom by LockBit associates.

[Category: Biz & IT, Security, encryption, lockbit, ransomware]

[l] at 6/6/24 10:39am
(credit: DuckDuckGo)

On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved readily output inaccurate information, the site lets users test different mid-range LLMs without having to install anything or sign up for an account.

DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP addresses removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.

[Category: AI, Biz & IT, AI confabulation, ai hallucinations, Anonymous, Anthropic, Anthropic Claude, ChatGPT, chatgtp, Claude, Claude 3 Haiku, confabulation, duckduckgo, Haiku, hallucinations, large language models, Llama 3, machine learning, meta, Mistral, Mixtral 8x7B, openai]

[l] at 6/5/24 3:41pm
A visual from the fake documentary Olympics Has Fallen, produced by Russia-affiliated influence actor Storm-1679. (credit: Microsoft)

Last year, a feature-length documentary purportedly produced by Netflix began circulating on Telegram. Titled “Olympics Has Fallen” and narrated by a voice strikingly similar to that of actor Tom Cruise, it sharply criticized the leadership of the International Olympic Committee. The slickly produced film, claiming five-star reviews from The New York Times, Washington Post, and BBC, was quickly amplified on social media. Among those seemingly endorsing the documentary were celebrities on the platform Cameo.

A recently published report by Microsoft (PDF) said the film was not a documentary, had received no such reviews, and that the narrator's voice was an AI-produced deepfake of Cruise. It also said the endorsements on Cameo were faked.

The Microsoft Threat Intelligence report went on to say that the fraudulent documentary and endorsements were only one of many elaborate hoaxes created by agents of the Russian government in a yearlong influence operation intended to discredit the International Olympic Committee (IOC) and deter participation and attendance at the Paris Olympics starting next month. Other examples of the Kremlin’s ongoing influence operation include:

[Category: Biz & IT, Security, disinformation, influence operations, Olympics, russia]

[l] at 6/4/24 3:52pm
(credit: Getty Images)

On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling for AI companies to commit to principles allowing employees to raise concerns about AI risks without fear of retaliation. The letter, titled "A Right to Warn about Advanced Artificial Intelligence," has so far been signed by 13 individuals, including some who chose to remain anonymous due to concerns about potential repercussions.

The signatories argue that while AI has the potential to deliver benefits to humanity, it also poses serious risks, ranging from "further entrenchment of existing inequalities" to "manipulation and misinformation" to "the loss of control of autonomous AI systems potentially resulting in human extinction." They also assert that AI companies possess substantial non-public information about their systems' capabilities, limitations, and risk levels, but currently have only weak obligations to share this information with governments and none with civil society.

[Category: AI, Biz & IT, google, machine learning, NDA, openai, Retaliation]

[l] at 6/4/24 3:16pm
A ransomware attack that crippled a London-based medical testing and diagnostics provider has led several major hospitals in the city to declare a critical incident and cancel non-emergency surgeries and pathology appointments, it was widely reported Tuesday.

The attack was detected Monday against Synnovis, a supplier of blood tests, swabs, bowel tests, and other hospital services in six London boroughs. The company said the attack has "affected all Synnovis IT systems, resulting in interruptions to many of our pathology services." The company gave no estimate of when its systems would be restored and provided no details about the attack or who was behind it.

Major impact

The outage has led hospitals, including Guy's and St Thomas' and King's College Hospital Trusts, to cancel operations and procedures involving blood transfusions. The cancellations include transplant surgeries, which require blood transfusions.

[Category: Biz & IT, Security, hospitals, ransomware]

[l] at 6/4/24 1:23pm
(credit: Getty Images)

Zoom CEO Eric Yuan has a vision for the future of work: sending your AI-powered digital twin to attend meetings on your behalf. In an interview with The Verge's Nilay Patel published Monday, Yuan shared his plans for Zoom to become an "AI-first company," using AI to automate tasks and reduce the need for human involvement in day-to-day work.

"Let’s say the team is waiting for the CEO to make a decision or maybe some meaningful conversation, my digital twin really can represent me and also can be part of the decision making process," Yuan said in the interview. "We’re not there yet, but that’s a reason why there’s limitations in today’s LLMs."

LLMs are large language models—text-predicting AI models that power AI assistants like ChatGPT and Microsoft Copilot. They can output very convincing human-like text based on probabilities, but they are far from being able to replicate human reasoning. Still, Yuan suggests that instead of relying on a generic LLM to impersonate you, in the future, people will train custom LLMs to simulate each person.
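The "based on probabilities" point is the crux of the skepticism: an LLM produces text by repeatedly assigning a probability to every candidate next token and sampling one, with no underlying model of the speaker's actual intent. A toy sketch of that sampling step (the tokens and probabilities here are invented for illustration, not drawn from any real model):

```python
import random

# Hypothetical next-token distribution, as an LLM might produce
# after a prompt like "My digital twin will attend the ..."
next_token_probs = {
    "meeting": 0.55,
    "call": 0.30,
    "offsite": 0.15,
}

def sample_next(probs: dict, rng=random.random) -> str:
    """Sample one token from a next-token probability distribution
    using inverse-CDF sampling over the cumulative probabilities."""
    r = rng()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the tail

print(sample_next(next_token_probs))
```

Repeating this step token by token yields fluent text, which is why the output is convincing; but nothing in the loop represents a decision-maker's goals, which is why a "digital twin" joining the decision-making process remains beyond today's LLMs.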

[Category: AI, Biz & IT, AI skepticism, ChatGPT, chatgtp, Eric Yuan, large language models, machine learning, Nilay Patel, Simon Willison, Ted Underwood, teleconferencing, the verge, Video calls, zoom]

[l] at 6/3/24 4:23pm
(credit: Getty Images)

Cloud storage provider Snowflake said that accounts belonging to multiple customers have been hacked after threat actors obtained credentials through info-stealing malware or by purchasing them on online crime forums.

Ticketmaster parent Live Nation—which disclosed Friday that hackers gained access to data it stored through an unnamed third-party provider—told TechCrunch the provider was Snowflake. The live-event ticket broker said it identified the hack on May 20, and a week later, a “criminal threat actor offered what it alleged to be Company user data for sale via the dark web.”

Ticketmaster is one of six Snowflake customers to be hit in the hacking campaign, said independent security researcher Kevin Beaumont, citing conversations with people inside the affected companies. The Australian Signals Directorate said Saturday it knew of “successful compromises of several companies utilizing Snowflake environments.” Researchers with security firm Hudson Rock said in a now-deleted post that Santander, Spain’s biggest bank, was also hacked in the campaign. The researchers cited online text conversations with the threat actor. Last month, Santander disclosed a data breach affecting customers in Chile, Spain, and Uruguay.

[Category: Biz & IT, Security, Data breaches, infostealers, snowflake]

[l] at 5/31/24 3:56pm
(credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."

[Category: AI, Biz & IT, machine learning, openai, the atlantic, Vox Media]

[l] at 5/30/24 8:00am
(credit: Getty Images)

One day last October, subscribers to an ISP known as Windstream began flooding message boards with reports that their routers had suddenly stopped working and remained unresponsive to reboots and all other attempts to revive them.

“The routers now just sit there with a steady red light on the front,” one user wrote, referring to the ActionTec T3200 router models Windstream provided to both them and a next-door neighbor. “They won't even respond to a RESET.”

In the messages—which appeared over a few days beginning on October 25—many Windstream users blamed the ISP for the mass bricking. They said it was the result of the company pushing updates that poisoned the devices. Windstream’s Kinetic broadband service has about 1.6 million subscribers in 18 states, including Iowa, Alabama, Arkansas, Georgia, and Kentucky. For many customers, Kinetic provides an essential link to the outside world.

[Category: Biz & IT, Security, Uncategorized, ISPs, malware, routers, wipers]

As of 6/25/24 11:56am. Last new 6/24/24 4:45pm.
