7/26/24 11:46am
[Image: A bad update to CrowdStrike's Falcon security software crashed millions of Windows PCs last week. (credit: CrowdStrike)]
CrowdStrike CEO George Kurtz said Thursday that 97 percent of all Windows systems running its Falcon sensor software were back online, a week after an update-related outage to the corporate security software delayed flights and took down emergency response systems, among many other disruptions. The update, which caused Windows PCs to throw the dreaded Blue Screen of Death and reboot, affected about 8.5 million systems by Microsoft's count, leaving roughly 250,000 that still need to be brought back online. Microsoft VP John Cable said in a blog post that the company has "engaged over 5,000 support engineers working 24x7" to help clean up the mess created by CrowdStrike's update and hinted at Windows changes that could help—if they don't run afoul of regulators, anyway. "This incident shows clearly that Windows must prioritize change and innovation in the area of end-to-end resilience," wrote Cable. "These improvements must go hand in hand with ongoing improvements in security and be in close cooperation with our many partners, who also care deeply about the security of the Windows ecosystem."
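As a quick sanity check, the two figures in the paragraph are consistent with each other (assuming, as the article implies, that the 97 percent recovery rate applies to the roughly 8.5 million affected machines):

```python
# Sanity-check the recovery figures reported by CrowdStrike and Microsoft.
affected = 8_500_000    # Windows systems hit by the bad update (Microsoft's count)
recovered_share = 0.97  # share back online, per CrowdStrike's CEO

still_down = affected * (1 - recovered_share)
print(f"Systems still offline: ~{still_down:,.0f}")  # ~255,000, in line with "roughly 250,000"
```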

[Category: Biz & IT, Security, BSOD, Crowdstrike, windows 10, windows 11]

7/26/24 10:24am
[Image: Police observe the Eiffel Tower from Trocadero ahead of the Paris 2024 Olympic Games on July 22, 2024. (credit: Hector Vivas/Getty Images)]
On the eve of the Olympics opening ceremony, Paris is a city swamped in security. Forty thousand barriers divide the French capital. Packs of police officers wearing stab vests patrol pretty, cobbled streets. The river Seine is out of bounds to anyone who has not already been vetted and issued a personal QR code. Khaki-clad soldiers, present since the 2015 terrorist attacks, linger near a canal-side boulangerie, wearing berets and clutching large guns to their chests. French interior minister Gérald Darmanin has spent the past week justifying these measures as vigilance—not overkill. France is facing the “biggest security challenge any country has ever had to organize in a time of peace,” he told reporters on Tuesday. In an interview with weekly newspaper Le Journal du Dimanche, he explained that “potentially dangerous individuals” have been caught applying to work or volunteer at the Olympics, including 257 radical Islamists, 181 members of the far left, and 95 from the far right. Yesterday, he told French news broadcaster BFM that a Russian citizen had been arrested on suspicion of plotting “large scale” acts of “destabilization” during the Games. Parisians are still grumbling about road closures and bike lanes that abruptly end without warning, while human rights groups are denouncing “unacceptable risks to fundamental rights.” For the Games, this is nothing new. Complaints about dystopian security are almost an Olympics tradition. Previous iterations have been characterized as Lockdown London, Fortress Tokyo, and the “arms race” in Rio. This time, it is the least-visible security measures that have emerged as some of the most controversial.
Security measures in Paris have been turbocharged by a new type of AI, as the city enables controversial algorithms to crawl CCTV footage of transport stations looking for threats. The system was first tested in Paris back in March at two Depeche Mode concerts.

[Category: AI, Biz & IT, Security, 2024 olympics, AI surveillance, syndication]

7/26/24 6:57am
[Image: Infrastructure!]
Howdy, Arsians! Last year, we partnered with IBM to host an in-person event in the Houston area where we all gathered together, had some cocktails, and talked about resiliency and the future of IT. Location always matters for things like this, and so we hosted it at Space Center Houston and had our cocktails amidst cool space artifacts. In addition to learning a bunch of neat stuff, it was awesome to hang out with all the amazing folks who turned up at the event. Much fun was had! This year, we're back partnering with IBM again and we're looking to repeat that success with not one, but two in-person gatherings—each featuring a series of panel discussions with experts and capping off with a happy hour for hanging out and mingling. Where last time we went central, this time we're going to the coasts—both east and west. Read on for details!
September: San Jose, California
Our first event will be in San Jose on September 18, and it's titled "Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next." The idea will be to explore what generative AI means for the future of data management.

[Category: Biz & IT, AI, dc, ibm, Infrastructure, san jose, Storage, Washington D.C.]

7/25/24 3:54pm
[Image: An illustration provided by Google. (credit: Google)]
On Thursday, Google DeepMind announced that AI systems called AlphaProof and AlphaGeometry 2 reportedly solved four out of six problems from this year's International Mathematical Olympiad (IMO), achieving a score equivalent to a silver medal. The tech giant claims this marks the first time an AI has reached this level of performance in the prestigious math competition—but as usual in AI, the claims aren't as clear-cut as they seem. Google says AlphaProof uses reinforcement learning to prove mathematical statements in the formal language called Lean. The system trains itself by generating and verifying millions of proofs, progressively tackling more difficult problems. Meanwhile, AlphaGeometry 2 is described as an upgraded version of Google's previous geometry-solving AI model, now powered by a Gemini-based language model trained on significantly more data. According to Google, prominent mathematicians Sir Timothy Gowers and Dr. Joseph Myers scored the AI model's solutions using official IMO rules. The company reports its combined system earned 28 out of 42 possible points, just shy of the 29-point gold medal threshold. This included a perfect score on the competition's hardest problem, which Google claims only five human contestants solved this year.

[Category: AI, Biz & IT, Google, AlphaGeometry, AlphaGeometry 2, AlphaProof, google, google deepmind, machine learning]

7/25/24 2:39pm
[Image credit: Benj Edwards / OpenAI]
Arguably, few companies have unintentionally contributed more to the increase of AI-generated noise online than OpenAI. Despite its best intentions—and against its terms of service—its AI language models are often used to compose spam, and its pioneering research has inspired others to build AI models that can potentially do the same. This influx of AI-generated content has further reduced the effectiveness of SEO-driven search engines like Google. In 2024, web search is in a sorry state indeed. It's interesting, then, that OpenAI is now offering a potential solution to that problem. On Thursday, OpenAI revealed a prototype AI-powered search engine called SearchGPT that aims to provide users with quick, accurate answers sourced from the web. It's also a direct challenge to Google, which also has tried to apply generative AI to web search (but with little success). The company says it plans to integrate the most useful aspects of the temporary prototype into ChatGPT in the future. ChatGPT (and Microsoft Copilot) can already perform web searches using Bing, but SearchGPT seems to be OpenAI's purpose-built interface for AI-assisted web searching.

[Category: AI, Biz & IT, ChatGPT, chatgtp, gpt, GPT-4, machine learning, openai, SearchGPT]

7/25/24 2:12pm
[Image credit: Chrome]
Google is redesigning Chrome malware detections to include password-protected executable files that users can upload for deep scanning, a change the browser maker says will allow it to detect more malicious threats. Google has long allowed users to switch on the Enhanced Mode of its Safe Browsing, a Chrome feature that warns users when they’re downloading a file that’s believed to be unsafe, either because of suspicious characteristics or because it’s in a list of known malware. With Enhanced Mode turned on, Google will prompt users to upload suspicious files that aren’t allowed or blocked by its detection engine. Under the new changes, Google will prompt these users to provide any password needed to open the file.
Beware of password-protected archives
In a post published Wednesday, Jasika Bawa, Lily Chen, and Daniel Rubery of the Chrome Security team explained the change.

[Category: Biz & IT, Google, Security, chrome, google, passwords, safe browsing]

7/25/24 12:00pm
[Image credit: sasha85ru | Getty Images]
In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect against a long-looming security threat. The threat was the specter of malware that could infect the BIOS, the firmware that loaded the operating system each time a computer booted up. From there, it could remain immune to detection and removal and could load even before the OS and security apps did. The threat of such BIOS-dwelling malware was largely theoretical and fueled in large part by the creation of ICLord Bioskit by a Chinese researcher in 2007. ICLord was a rootkit, a class of malware that gains and maintains stealthy root access by subverting key protections built into the operating system. The proof of concept demonstrated that such BIOS rootkits weren't only feasible; they were also powerful. In 2011, the threat became a reality with the discovery of Mebromi, the first-known BIOS rootkit to be used in the wild. Keenly aware of Mebromi and its potential for a devastating new class of attack, the Secure Boot architects hashed out a complex new way to shore up security in the pre-boot environment. Built into UEFI—the Unified Extensible Firmware Interface that would become the successor to BIOS—Secure Boot used public-key cryptography to block the loading of any code that wasn’t signed with a pre-approved digital signature. To this day, key players in security—among them Microsoft and the US National Security Agency—regard Secure Boot as an important, if not essential, foundation of trust in securing devices in some of the most critical environments, including in industrial control and enterprise networks.
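The gating idea behind Secure Boot can be sketched in a few lines. This is a toy illustration only, not real UEFI firmware code: the UEFI signature database ("db") can hold certificates as well as raw image hashes, and an image loads only if it matches an approved entry. The image bytes and hash below are hypothetical.

```python
import hashlib

# Toy sketch of Secure Boot's gating idea (not actual UEFI code):
# a boot image loads only if it matches an entry in an approved database.
APPROVED_HASHES = {
    # hypothetical SHA-256 of a vendor-approved bootloader image
    hashlib.sha256(b"trusted-bootloader-image").hexdigest(),
}

def allow_boot(image: bytes) -> bool:
    """Return True only if the image's hash is in the approved database."""
    return hashlib.sha256(image).hexdigest() in APPROVED_HASHES

print(allow_boot(b"trusted-bootloader-image"))  # True
print(allow_boot(b"mebromi-style-rootkit"))     # False: unapproved code is refused
```

Real Secure Boot verifies digital signatures against trusted certificates rather than comparing raw hashes alone, but the gate-before-load structure is the same.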

[Category: Biz & IT, Features, Security, cryptography, key compromise, rootkits, secure boot, supply chain, uefi, unified extensible firmware interface]

7/24/24 4:12pm
In June, Runway debuted a new text-to-video synthesis model called Gen-3 Alpha. It converts written descriptions called "prompts" into HD video clips without sound. We've since had a chance to use it and wanted to share our results. Our tests show that careful prompting isn't as important as matching concepts likely found in the training data, and that achieving amusing results likely requires many generations and selective cherry-picking. An enduring theme of all generative AI models we've seen since 2022 is that they can be excellent at mixing concepts found in training data but are typically very poor at generalizing (applying learned "knowledge" to new situations the model has not explicitly been trained on). That means they can excel at stylistic and thematic novelty but struggle at fundamental structural novelty that goes beyond the training data. What does all that mean? In the case of Runway Gen-3, lack of generalization means you might ask for a sailing ship in a swirling cup of coffee, and provided that Gen-3's training data includes video examples of sailing ships and swirling coffee, that's an "easy" novel combination for the model to make fairly convincingly. But if you ask for a cat drinking a can of beer (in a beer commercial), it will generally fail because there aren't likely many videos of photorealistic cats drinking human beverages in the training data. Instead, the model will pull from what it has learned about videos of cats and videos of beer commercials and combine them. The result is a cat with human hands pounding back a brewsky.

[Category: AI, Biz & IT, Ai video generator, AI-generated video, barbarians, deepfakes, Gen-3, Gen-3 Alpha, image synthesis, machine learning, Runway, video synthesis, will smith]

7/24/24 11:33am
[Image: CrowdStrike's Falcon security software brought down as many as 8.5 million Windows PCs over the weekend. (credit: CrowdStrike)]
Security firm CrowdStrike has posted a preliminary post-incident report about the botched update to its Falcon security software that caused as many as 8.5 million Windows PCs to crash over the weekend, delaying flights, disrupting emergency response systems, and generally wreaking havoc. The detailed post explains exactly what happened: At just after midnight Eastern time, CrowdStrike deployed "a content configuration update" to allow its software to "gather telemetry on possible novel threat techniques." CrowdStrike says that these Rapid Response Content updates are tested before being deployed, and one of the steps involves checking updates using something called the Content Validator. In this case, "a bug in the Content Validator" failed to detect "problematic content data" in the update responsible for the crashing systems. CrowdStrike says it is making changes to its testing and deployment processes to prevent something like this from happening again. The company is specifically including "additional validation checks to the Content Validator" and adding more layers of testing to its process.
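Why one buggy validator matters can be sketched abstractly. The code below is a hypothetical toy, not CrowdStrike's actual pipeline or content format: deployment is gated on a single validation function, so a bug in that function (here, a check that is too shallow) lets malformed content ship to every endpoint.

```python
# Toy sketch (not CrowdStrike's real pipeline) of a validator as a
# single point of failure: content ships if the validator passes it,
# so a validator bug lets "problematic content data" through.
def buggy_validator(update: dict) -> bool:
    # Hypothetical bug: only checks that the field exists,
    # never that its contents are well-formed.
    return "telemetry_rules" in update

def deploy(update: dict) -> str:
    return "deployed" if buggy_validator(update) else "rejected"

good = {"telemetry_rules": ["rule-a", "rule-b"]}
bad = {"telemetry_rules": None}  # malformed content the validator misses

print(deploy(good))  # deployed
print(deploy(bad))   # deployed -- and only fails later, at load time on endpoints
```

Adding the "additional validation checks" and extra test layers CrowdStrike describes amounts to deepening that gate so malformed content is caught before deployment rather than after.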

[Category: Biz & IT, Tech, BSOD, Crowdstrike, windows 10, windows 11]

7/24/24 10:19am
[Image: Elon Musk, chief executive officer of Tesla Inc., during a fireside discussion on artificial intelligence risks with Rishi Sunak, UK prime minister, in London, UK, on Thursday, Nov. 2, 2023. (credit: Getty Images)]
On Monday, Elon Musk announced the start of training for what he calls "the world's most powerful AI training cluster" at xAI's new supercomputer facility in Memphis, Tennessee. The billionaire entrepreneur and CEO of multiple tech companies took to X (formerly Twitter) to share that the so-called "Memphis Supercluster" began operations at approximately 4:20 am local time that day. Musk's xAI team, in collaboration with X and Nvidia, launched the supercomputer cluster featuring 100,000 liquid-cooled H100 GPUs on a single RDMA fabric. This setup, according to Musk, gives xAI "a significant advantage in training the world's most powerful AI by every metric by December this year." Given issues with xAI's Grok chatbot throughout the year, skeptics would be justified in questioning whether those claims will match reality, especially given Musk's tendency for grandiose, off-the-cuff remarks on the social media platform he runs.

[Category: AI, Biz & IT, ChatGPT, chatgtp, Elon Musk, GPUs, grok, H100, large language models, Memphis, Memphis supercluster, NVIDIA, tennessee, Tesla, TVA, WREG, xAI]

7/24/24 5:15am
[Image: The cityscape from the tower of the Lviv Town Hall in winter. (credit: Anastasiia Smolienko / Ukrinform/Future Publishing via Getty Images)]
As Russia has tested every form of attack on Ukraine's civilians over the past decade, both digital and physical, it's often used winter as one of its weapons—launching cyberattacks on electric utilities to trigger December blackouts and ruthlessly bombing heating infrastructure. Now it appears Russia-based hackers last January tried yet another approach to leave Ukrainians in the cold: a specimen of malicious software that, for the first time, allowed hackers to reach directly into a Ukrainian heating utility, switching off heat and hot water to hundreds of buildings in the midst of a winter freeze. Industrial cybersecurity firm Dragos on Tuesday revealed a newly discovered sample of Russia-linked malware that it believes was used in a cyberattack in late January to target a heating utility in Lviv, Ukraine, disabling service to 600 buildings for around 48 hours. The attack, in which the malware altered temperature readings to trick control systems into cooling the hot water running through buildings' pipes, marks the first confirmed case in which hackers have directly sabotaged a heating utility. Dragos' report on the malware notes that the attack occurred at a moment when Lviv was experiencing its typical January freeze, close to the coldest time of the year in the region, and that “the civilian population had to endure sub-zero [Celsius] temperatures.” As Dragos analyst Kyle O'Meara puts it more bluntly: “It's a shitty thing for someone to turn off your heat in the middle of winter.”
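The sensor-spoofing trick described above can be illustrated with a toy thermostat loop. This is a hypothetical sketch, not Dragos' analysis or the actual malware logic; the setpoint and readings are invented:

```python
# Toy thermostat (not the real control system) showing how spoofed sensor
# readings subvert a heating controller: if malware inflates the reported
# temperature, the controller "thinks" the water is hot enough and shuts
# the heat off, even though the water is actually cold.
SETPOINT_C = 60.0  # hypothetical target hot-water temperature

def controller(reported_temp_c: float) -> str:
    return "heat OFF" if reported_temp_c >= SETPOINT_C else "heat ON"

actual_temp = 20.0                    # the water is actually cold
honest_reading = actual_temp
spoofed_reading = actual_temp + 50.0  # malware-altered value

print(controller(honest_reading))   # heat ON
print(controller(spoofed_reading))  # heat OFF -- buildings go cold
```

The key point is that the controller only ever sees the reported value, so falsifying the sensor input is enough to flip its decision without touching the heating hardware directly.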

[Category: Biz & IT, Security, Russian war on Ukraine, syndication, Ukraine invasion, Ukraine war]

7/23/24 2:01pm
[Image credit: Benj Edwards / Getty Images]
In the AI world, there's a buzz in the air about a new AI language model released Tuesday by Meta: Llama 3.1 405B. The reason? It's potentially the first time anyone can download a GPT-4-class large language model (LLM) for free and run it on their own hardware. You'll still need some beefy hardware: Meta says it can run on a "single server node," which isn't desktop PC-grade equipment. But it's a provocative shot across the bow of "closed" AI model vendors such as OpenAI and Anthropic. "Llama 3.1 405B is the first openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation," says Meta. Company CEO Mark Zuckerberg calls 405B "the first frontier-level open source AI model." In the AI industry, "frontier model" is a term for an AI system designed to push the boundaries of current capabilities. In this case, Meta is positioning 405B among the likes of the industry's top AI models, such as OpenAI's GPT-4o, Anthropic's Claude 3.5 Sonnet, and Google Gemini 1.5 Pro.

[Category: AI, Biz & IT, Anthropic, Anthropic Claude, chatbots, ChatGPT, chatgtp, Claude 3, GPT-4, GPT-4o, large language models, LLaMA, Llama 3, Llama 3.1, Llama 3.1 405B, machine learning, meta, openai]

7/22/24 11:07am
[Image: A bad update to CrowdStrike's Falcon security software crashed millions of Windows PCs last week. (credit: CrowdStrike)]
By Monday morning, many of the major disruptions from the flawed CrowdStrike security update late last week had cleared up. Flight delays and cancellations were no longer front-page news, and multiple Starbucks locations near me are taking orders through the app once again. But the cleanup effort continues. Microsoft estimates that around 8.5 million Windows systems were affected by the issue, which involved a buggy .sys file that was automatically pushed to Windows PCs running the CrowdStrike Falcon security software. Once downloaded, that update caused Windows systems to display the dreaded Blue Screen of Death and enter a boot loop. "While software updates may occasionally cause disturbances, significant incidents like the CrowdStrike event are infrequent," wrote Microsoft VP of Enterprise and OS Security David Weston in a blog post. "We currently estimate that CrowdStrike’s update affected 8.5 million Windows devices, or less than one percent of all Windows machines. While the percentage was small, the broad economic and societal impacts reflect the use of CrowdStrike by enterprises that run many critical services."

[Category: Biz & IT, Tech, Crowdstrike, microsoft]

7/22/24 10:19am
[Image: Researchers write, "In this image, the person on the left (Scarlett Johansson) is real, while the person on the right is AI-generated. Their eyeballs are depicted underneath their faces. The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person." (credit: Adejumoke Owolabi)]
In 2024, it's almost trivial to create realistic AI-generated images of people, which has led to fears about how these deceptive images might be detected. Researchers at the University of Hull recently unveiled a novel method for detecting AI-generated deepfake images by analyzing reflections in human eyes. The technique, presented at the Royal Astronomical Society's National Astronomy Meeting last week, adapts tools used by astronomers to study galaxies for scrutinizing the consistency of light reflections in eyeballs. Adejumoke Owolabi, an MSc student at the University of Hull, headed the research under the guidance of Dr. Kevin Pimbblet, professor of astrophysics. Their detection technique is based on a simple principle: A pair of eyes being illuminated by the same set of light sources will typically have a similarly shaped set of light reflections in each eyeball. Many AI-generated images created to date don't take eyeball reflections into account, so the simulated light reflections are often inconsistent between each eye.
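The underlying principle can be illustrated with a toy consistency check. This is a simplified sketch, not the Hull team's actual method (which adapts astronomical morphology tools): it just asks whether the bright "reflection" pixels sit in the same places in both eyes.

```python
# Toy illustration of the eye-reflection consistency idea: compare the
# positions of bright (reflection) pixels in tiny grayscale patches of
# the left and right eye. Real photos tend to show matching patterns;
# many AI-generated faces do not.
def reflection_pattern(eye_pixels, threshold=200):
    """Coordinates of bright pixels in a small grayscale patch."""
    return {
        (r, c)
        for r, row in enumerate(eye_pixels)
        for c, v in enumerate(row)
        if v >= threshold
    }

def consistency(left, right):
    """Jaccard overlap of the two reflection patterns (1.0 = identical)."""
    a, b = reflection_pattern(left), reflection_pattern(right)
    return len(a & b) / len(a | b) if (a | b) else 1.0

real_left  = [[0, 255, 0], [0, 0, 0], [0, 0, 0]]
real_right = [[0, 255, 0], [0, 0, 0], [0, 0, 0]]  # highlight in the same spot
fake_right = [[0, 0, 0], [0, 0, 0], [255, 0, 0]]  # highlight in the wrong spot

print(consistency(real_left, real_right))  # 1.0
print(consistency(real_left, fake_right))  # 0.0
```

The researchers' version operates on real photographs and uses more robust shape statistics than a simple overlap score, but the comparison-between-eyes structure is the same.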

[Category: AI, Biz & IT, Adejumoke Owolabi, AI-generated images, deepfakes, Hull, image synthesis, Kevin Pimbblet, machine learning, Royal Astronomical Society, uk, united kingdom]

7/19/24 9:43am
[Image credit: hdaniel]
Airlines, payment processors, 911 call centers, TV networks, and other businesses have been scrambling this morning after a buggy update to CrowdStrike's Falcon security software caused Windows-based systems to crash with a dreaded blue screen of death (BSOD) error message. We're updating our story about the outage with new details as we have them. Microsoft and CrowdStrike both say that "the affected update has been pulled," so what's most important for IT admins in the short term is getting their systems back up and running again. According to guidance from Microsoft, fixes range from annoying but easy to incredibly time-consuming and complex, depending on the number of systems you have to fix and the way your systems are configured. Microsoft's Azure status page outlines several fixes. The first and easiest is simply to try to reboot affected machines over and over, which gives affected machines multiple chances to try to grab CrowdStrike's non-broken update before the bad driver can cause the BSOD. Microsoft says that some of its customers have had to reboot their systems as many as 15 times to pull down the update.

[Category: Biz & IT, Tech, azure, BSOD, Crowdstrike, microsoft, Microsoft Azure]

7/19/24 7:22am
[Image: A passenger sits on the floor as long queues form at the check-in counters at Ninoy Aquino International Airport, on July 19, 2024 in Manila, Philippines. (credit: Ezra Acayan/Getty Images)]
Millions of people outside the IT industry are learning what CrowdStrike is today, and that's a real bad thing. Meanwhile, Microsoft is also catching blame for global network outages, and between the two, it's unclear as of Friday morning just who caused what. After cybersecurity firm CrowdStrike shipped an update to its Falcon Sensor software that protects mission-critical systems, blue screens of death (BSODs) started taking down Windows-based systems. The problems started in Australia and followed the dateline from there. TV networks, 911 call centers, and even the Paris Olympics were affected. Banks and financial systems in India, South Africa, Thailand, and other countries fell as computers suddenly crashed. Some individual workers discovered that their work-issued laptops were booting to blue screens on Friday morning. The outages took down not only Starbucks mobile ordering, but also a single motel in Laramie, Wyoming.

[Category: Biz & IT, Security, airlines, Crowdstrike, cybersecurity, falcon sensor, global outage, microsoft, Microsoft Azure]

7/18/24 11:34am
[Image credit: Getty Images]
You have to read the headline on Nvidia's latest GPU announcement slowly, parsing each clause as it arrives. "Nvidia transitions fully" sounds like real commitment, a burn-the-boats call. "Towards open-source GPU," yes, evoking the company's "first step" announcement a little over two years ago, so this must be progress, right? But, back up a word here, then finish: "GPU kernel modules." So, Nvidia has "achieved equivalent or better application performance with our open-source GPU kernel modules," and added some new capabilities to them. And now most of Nvidia's modern GPUs will default to using open source GPU kernel modules, starting with driver release R560, with dual GPL and MIT licensing. But Nvidia has moved most of its proprietary functions into a proprietary, closed-source firmware blob. The parts of Nvidia's GPUs that interact with the broader Linux system are open, but the user-space drivers and firmware are none of your or the OSS community's business.

[Category: Biz & IT, Tech, CUDA, Linux, Linux kernel, NVIDIA, nvidia drivers, open source]

7/18/24 9:44am
[Image credit: Benj Edwards]
On Thursday, OpenAI announced the launch of GPT-4o mini, a new, smaller version of its latest GPT-4o AI language model that will replace GPT-3.5 Turbo in ChatGPT, report CNBC and Bloomberg. It will be available today for free users and those with ChatGPT Plus or Team subscriptions and will come to ChatGPT Enterprise next week. GPT-4o mini will reportedly be multimodal like its big brother (which launched in May), with image inputs currently enabled in the API. OpenAI says that in the future, GPT-4o mini will be able to interpret images, text, and audio, and also will be able to generate images. GPT-4o mini supports 128K tokens of input context and a knowledge cutoff of October 2023. It's also very inexpensive as an API product, costing 60 percent less than GPT-3.5 Turbo at 15 cents per million input tokens and 60 cents per million output tokens. Tokens are fragments of data that AI language models use to process information.
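At the quoted prices, per-call cost is simple arithmetic. A minimal sketch (the token counts below are hypothetical):

```python
# Cost of a hypothetical GPT-4o mini API call at the article's quoted
# prices: $0.15 per million input tokens, $0.60 per million output tokens.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.60

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. summarizing a long document: 100k tokens in, 2k tokens out
print(f"${cost_usd(100_000, 2_000):.4f}")  # $0.0162
```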

[Category: AI, Biz & IT, chatbots, ChatGPT, chatgtp, gpt-3.5-turbo, GPT-4, GPT-4o, GPT-4o Mini, large language models, machine learning, openai]

7/17/24 1:47pm
Cisco on Wednesday disclosed a maximum-severity vulnerability that allows remote threat actors with no authentication to change the password of any user, including administrators, on Cisco Smart Software Manager On-Prem devices. The Cisco Smart Software Manager On-Prem resides inside the customer premises and provides a dashboard for managing licenses for all Cisco gear in use. It’s used by customers who can’t or don’t want to manage licenses in the cloud, as is more common. In a bulletin, Cisco warns that the product contains a vulnerability that allows hackers to change any account's password. The severity of the vulnerability, tracked as CVE-2024-20419, is rated 10, the maximum score.

[Category: Biz & IT, Security, Cisco, passwords, vulnerabilities]

7/17/24 8:44am
[Image: Former US President Donald Trump during a campaign event at Trump National Doral Golf Club in Miami, Florida, US, on Tuesday, July 9, 2024. (credit: Getty Images)]
Allies of former President Donald Trump have reportedly drafted a sweeping AI executive order that aims to boost military technology and reduce regulations on AI development, The Washington Post reported. The plan, which includes a section titled "Make America First in AI," signals a dramatic potential shift in AI policy if Trump returns to the White House in 2025. The draft order, obtained by the Post, outlines a series of "Manhattan Projects" to advance military AI capabilities. It calls for an immediate review of what it terms "unnecessary and burdensome regulations" on AI development. The approach marks a contrast to the Biden administration's executive order from last October, which imposed new safety testing requirements on advanced AI systems. The proposed order suggests creating "industry-led" agencies to evaluate AI models and safeguard systems from foreign threats. This approach would likely benefit tech companies already collaborating with the Pentagon on AI projects, such as Palantir, Anduril, and Scale AI. Executives from these firms have reportedly expressed support for Trump.

[Category: AI, Biz & IT, 2024 presidential election, AI regulation, Donald Trump, GOP, J.D. Vance, JD Vance, Joe Biden, machine learning, President Biden, Republican party, us president, White House]

7/16/24 4:09pm
[Image: Rite Aid logo displayed at one of its stores. (credit: Getty Images)]
Rite Aid, the third-biggest US drugstore chain, said that more than 2.2 million of its customers have been swept into a data breach that stole personal information, including driver's license numbers, addresses, and dates of birth. The company said in mandatory filings with the attorneys general of states including Maine, Massachusetts, Vermont, and Oregon that the stolen data was associated with purchases or attempted purchases of retail products made between June 6, 2017, and July 30, 2018. The data provided included the purchaser's name, address, date of birth, and driver's license number or other form of government-issued ID. No Social Security numbers, financial information, or patient information were included. “On June 6, 2024, an unknown third party impersonated a company employee to compromise their business credentials and gain access to certain business systems,” the filing stated. “We detected the incident within 12 hours and immediately launched an internal investigation to terminate the unauthorized access, remediate affected systems and ascertain if any customer data was impacted.”

[Category: Biz & IT, Security, Data breaches, personal information, rite aid]

As of 7/26/24 10:27pm. Last new 7/26/24 2:06pm.
