Posted 5/23/24 10:38am

[Image: Nathan Benaich of Air Street Capital delivers the opening presentation on the state of AI at EmTech Digital 2024 on May 22, 2024. (credit: Benj Edwards)]

CAMBRIDGE, Massachusetts—On Wednesday, AI enthusiasts and experts gathered to hear a series of presentations about the state of AI at EmTech Digital 2024 on the Massachusetts Institute of Technology's campus. The event was hosted by the publication MIT Technology Review. The overall consensus was that generative AI is still in its very early stages—with policy, regulations, and social norms still being established—and that its growth is likely to continue. I was there to check the event out. MIT is the birthplace of many tech innovations—including the first action-oriented computer video game—so it felt fitting to hear talks about the latest tech craze in the same building that hosts MIT's Media Lab on its sprawling and lush campus. EmTech's speakers included AI researchers, policy experts, critics, and company spokespeople. A corporate feel pervaded the event due to strategic sponsorships, but it was handled in a low-key way that matches the level-headed tech coverage coming out of MIT Technology Review. After each presentation, MIT Technology Review staff—such as Editor-in-Chief Mat Honan and Senior Reporter Melissa Heikkilä—did a brief sit-down interview with the speaker, pushing back on some points and emphasizing others. Then the speaker took a few audience questions if time allowed.

[Category: AI, Biz & IT, EmTech Digital, machine learning, Mat Honan, Melissa Heikkilä, MIT, mit media lab, MIT Technology Review]

Posted 5/21/24 1:14pm

[Image: (credit: Getty Images)]

Malware recently spotted in the wild uses sophisticated measures to disable antivirus protections, destroy evidence of infection, and permanently infect machines with cryptocurrency-mining software, researchers said Tuesday. Key to making the unusually complex system of malware operate is a function in the main payload, named GhostEngine, that disables Microsoft Defender or any other antivirus or endpoint-protection software that may be running on the targeted computer. It also hides any evidence of compromise. “The first objective of the GhostEngine malware is to incapacitate endpoint security solutions and disable specific Windows event logs, such as Security and System logs, which record process creation and service registration,” said researchers from Elastic Security Labs, who discovered the attacks. When it first executes, GhostEngine scans machines for any EDR (endpoint detection and response) software that may be running. If it finds any, it loads drivers known to contain vulnerabilities that allow attackers to gain access to the kernel, the core of all operating systems, which is heavily restricted to prevent tampering. One of the vulnerable drivers is an anti-rootkit file from Avast named aswArPots.sys. GhostEngine uses it to terminate the EDR security agent. A malicious file named smartscreen.exe then uses a driver from IObit named iobitunlockers.sys to delete the security agent binary.

[Category: Biz & IT, Security]

Posted 5/20/24 3:43pm

[Image: A screenshot of Microsoft's new "Recall" feature in action. (credit: Microsoft)]

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called "Recall" for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users. "Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds," Microsoft says on its website. "The snapshots are encrypted and saved on your PC’s hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots." By performing a Recall action, users can access a snapshot from a specific time period, providing context for the event or moment they are searching for. It also allows users to search through teleconference meetings they've participated in and videos watched, using an AI-powered feature that transcribes and translates speech.

[Category: Biz & IT, Tech, AI, ChatGPT, copilot, machine learning, microsoft, Microsoft Copilot, Microsoft Windows, openai, Windows]

Posted 5/20/24 2:51pm

[Image: (credit: Benj Edwards | Getty Images)]

Since the launch of its latest AI language model, GPT-4o, OpenAI has found itself on the defensive over the past week due to a string of bad news, rumors, and ridicule circulating on traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new level of public visibility—and is more prominently receiving pushback to its AI approach beyond tech pundits and government regulators. OpenAI's rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023. While that September update included a voice called "Sky" that some have said sounds like Johansson, it was GPT-4o's seemingly lifelike new conversational interface, complete with laughing and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirty nature. Next, a Saturday Night Live joke reinforced an implied connection to Johansson's voice.

[Category: AI, Biz & IT, AI assistant, ai chatbot, ChatGPT, GPT-4o, greg brockman, Her, Ilya Sutskever, Jan Leike, large language models, machine learning, openai, sam altman, scarlett johansson, Yann LeCun]

Posted 5/17/24 1:27pm

[Image: (credit: Brendan Smialowski / Getty Images)]

The Securities and Exchange Commission (SEC) will require some financial institutions to disclose security breaches within 30 days of learning about them. On Wednesday, the SEC adopted changes to Regulation S-P, which governs the treatment of the personal information of consumers. Under the amendments, institutions must notify individuals whose personal information was compromised “as soon as practicable, but not later than 30 days” after learning of unauthorized network access or use of customer data. The new requirements will be binding on broker-dealers (including funding portals), investment companies, registered investment advisers, and transfer agents. "Over the last 24 years, the nature, scale, and impact of data breaches has transformed substantially," SEC Chair Gary Gensler said. "These amendments to Regulation S-P will make critical updates to a rule first adopted in 2000 and help protect the privacy of customers’ financial data. The basic idea for covered firms is if you’ve got a breach, then you’ve got to notify. That’s good for investors."

[Category: Biz & IT, Policy, Security]

Posted 5/16/24 4:49pm

[Image: (credit: Getty Images | the-lightwriter)]

An Arizona woman has been accused of generating millions of dollars for North Korea’s ballistic missile program by helping citizens of that country land IT jobs at US-based Fortune 500 companies. Christina Marie Chapman, 49, of Litchfield Park, Arizona, raised $6.8 million in the scheme, federal prosecutors said in an indictment unsealed Thursday. Chapman allegedly funneled the money to North Korea’s Munitions Industry Department, which is involved in key aspects of North Korea’s weapons program, including its development of ballistic missiles. Part of the alleged scheme involved Chapman and co-conspirators compromising the identities of more than 60 people living in the US and using their personal information to get North Koreans IT jobs at more than 300 US companies.

[Category: Biz & IT, Security, ballistic missiles, information technology, North Korea]

Posted 5/16/24 11:44am

[Image: (credit: The Serial Code/YouTube)]

It's amazing, and a little sad, to think that something created in 1989 that changed how people used and viewed the then-nascent Internet had nearly vanished by 2024. Nearly, that is, because the dogged researchers and enthusiasts at The Serial Port channel on YouTube have found what is likely the last existing copy of Archie. Archie, first crafted by Alan Emtage while a student at McGill University in Montreal, Quebec, allowed for the searching of various "anonymous" FTP servers around what was then a very small network of universities, researchers, and government and military nodes. It was groundbreaking; it was the first echo of the "anything, anywhere" Internet to come. And when The Serial Port went looking, it very much did not exist.

[Video: The Serial Port's journey from wondering where the last Archie server was to hosting its own.]

While Archie would eventually be supplanted by Gopher, web portals, and search engines, it remains a useful way to index FTP sites and certainly should be preserved. The Serial Port did this, and the road to get there is remarkable and intriguing. You are best off watching the video of their rescue, along with its explanatory preamble. But I present here some notable bits of the tale, perhaps to tempt you into digging further.
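Conceptually, Archie crawled the file listings of anonymous FTP servers into an index and matched filenames against queries. A minimal sketch of that idea in Python, using hypothetical hosts and paths rather than Archie's actual data structures:

```python
# Hypothetical index: host -> file paths gathered from anonymous FTP listings.
index = {
    "ftp.example.edu": ["pub/games/rogue.tar.Z", "pub/tex/dvips.tar.Z"],
    "ftp.example.org": ["systems/unix/rogue.shar", "doc/rfc/rfc959.txt"],
}

def archie_search(term):
    """Return every (host, path) whose path contains the search term."""
    return sorted(
        (host, path)
        for host, files in index.items()
        for path in files
        if term.lower() in path.lower()
    )

print(archie_search("rogue"))  # every server carrying a matching file
```

Real Archie periodically re-fetched listings and matched against filenames only, but the core "index once, search many" shape is the same.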

[Category: Biz & IT, Tech, alan emtage, anonymous ftp, archie, FTP, search engines, serial port, web search]

Posted 5/15/24 4:37pm

[Image: The front page of BreachForums.]

The FBI and law enforcement partners worldwide have seized BreachForums, a website that openly trafficked malware and data stolen in hacks. The site operated for years as an online trading post where criminals could buy and sell all kinds of compromised data, including passwords, customer records, and other often sensitive data. Last week, a site user advertised the sale of Dell customer data that was obtained from a support portal, forcing the computer maker to issue a vague warning to those affected. Also last week, Europol confirmed to Bleeping Computer that some of its data had been exposed in a breach of one of its portals. The data was put up for sale on BreachForums, Bleeping Computer reported. On Wednesday, the normal BreachForums front page was replaced with one that proclaimed: “This website has been taken down by the FBI and DOJ with assistance from international partners.” It went on to say agents are analyzing the backend data and invited those with information about the site to contact them. A graphic shown prominently at the top showed the forum profile images of the site's two administrators, Baphomet and ShinyHunters, positioned behind prison bars.

[Category: Biz & IT, Security]

Posted 5/15/24 2:51pm

[Image: Still images taken from videos generated by Google Veo. (credit: Google / Benj Edwards)]

On Tuesday at Google I/O 2024, Google announced Veo, a new AI video synthesis model that can create HD videos from text, image, or video prompts, similar to OpenAI's Sora. It can generate 1080p videos lasting over a minute and edit videos from written instructions, but it has not yet been released for broad use. Veo reportedly includes the ability to edit existing videos using text commands, maintain visual consistency across frames, and generate video sequences lasting up to and beyond 60 seconds from a single prompt or a series of prompts that form a narrative. The company says it can generate detailed scenes and apply cinematic effects such as time-lapses, aerial shots, and various visual styles. Since the launch of DALL-E 2 in April 2022, we've seen a parade of new image synthesis and video synthesis models that aim to allow anyone who can type a written description to create a detailed image or video. While neither technology has been fully refined, both AI image and video generators have been steadily growing more capable.

[Category: AI, Biz & IT, Google, AI filmmaking, Ai video generators, google, Google Veo, image synthesis, Imagen Video, machine learning, openai, OpenAI Sora, Sora, video synthesis]

Posted 5/15/24 10:56am

[Image: (credit: BeeBright / Getty Images / iStockphoto)]

Infrastructure used to maintain and distribute the Linux operating system kernel was infected for two years, starting in 2009, by sophisticated malware that managed to get a hold of one of the developers’ most closely guarded resources: the /etc/shadow files that stored encrypted password data for more than 550 system users, researchers said Tuesday. The unknown attackers behind the compromise infected at least four servers inside kernel.org, the Internet domain underpinning the sprawling Linux development and distribution network, the researchers from security firm ESET said. After obtaining the cryptographic hashes for 551 user accounts on the network, the attackers were able to convert half into plaintext passwords, likely through password-cracking techniques and the use of an advanced credential-stealing feature built into the malware. From there, the attackers used the servers to send spam and carry out other nefarious activities. The four servers were likely infected and disinfected at different times, with the last two being remediated at some point in 2011.

Stealing kernel.org’s keys to the kingdom

An infection of kernel.org came to light in 2011, when kernel maintainers revealed that 448 accounts had been compromised after attackers had somehow managed to gain unfettered, or “root,” system access to servers connected to the domain. Maintainers reneged on a promise to provide an autopsy of the hack, a decision that has limited the public’s understanding of the incident.
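The "half of 551 hashes" figure reflects how dictionary cracking works: each candidate word is hashed with a user's salt and compared to the stored value, so only passwords present in the attacker's wordlist fall. A simplified Python model of that process (real /etc/shadow entries use crypt(3) schemes such as $6$ many-round SHA-512, not a single hash pass, and these usernames and passwords are invented):

```python
import hashlib

def hash_pw(password, salt):
    # Simplified salted hash standing in for a crypt(3) scheme.
    return hashlib.sha512((salt + password).encode()).hexdigest()

# Hypothetical shadow rows: username -> (salt, stored hash)
shadow = {
    "alice": ("x1", hash_pw("hunter2", "x1")),
    "bob":   ("y2", hash_pw("correct horse battery staple", "y2")),
}

wordlist = ["password", "hunter2", "letmein"]  # attacker's dictionary

cracked = {
    user: word
    for user, (salt, stored) in shadow.items()
    for word in wordlist
    if hash_pw(word, salt) == stored
}
print(cracked)  # only wordlist passwords fall: 1 of 2 here
```

Strong, unguessable passphrases like bob's survive even when the hash file itself is stolen, which is why the attackers recovered only about half of the accounts.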

[Category: Biz & IT, Security, Uncategorized, ebury, Linux, openssh, secure shell, SSH]

Posted 5/14/24 9:05pm

[Image: An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: New OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman, Sutskever, CEO Sam Altman, and CTO Mira Murati. (credit: Ilya Sutskever / X)]

On Tuesday evening, OpenAI Chief Scientist Ilya Sutskever announced that he is leaving the company he co-founded, six months after he participated in the coup that temporarily ousted OpenAI CEO Sam Altman. Jan Leike, a fellow member of Sutskever's Superalignment team, is reportedly resigning with him. "After almost a decade, I have made the decision to leave OpenAI," Sutskever tweeted. "The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly." Sutskever has been with the company since its founding in 2015 and is widely seen as one of the key engineers behind some of OpenAI's biggest technical breakthroughs. As a former OpenAI board member, he played a key role in the removal of Sam Altman as CEO in the shocking firing last November. While it later emerged that Altman's firing primarily stemmed from a power struggle with former board member Helen Toner, Sutskever sided with Toner and personally delivered the news to Altman that he was being fired on behalf of the board.

[Category: AI, Biz & IT, Altman firing, breaking news, ChatGPT, GPT-4, greg brockman, Ilya Sutskever, Jakub Pachocki, Jan Leike, large language models, Mira Murati, openai, sam altman]

Posted 5/14/24 1:11pm

[Image: A video still of the Project Astra demo at the Google I/O conference keynote in Mountain View on May 14, 2024. (credit: Google)]

Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what's taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California. Hassabis called Astra "a universal agent helpful in everyday life." During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts. Google says that Astra uses the camera and microphone on a user's device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera's frame.

[Category: AI, Biz & IT, Google, Tech, AI models, ChatGPT, gemini, google, Google Gemini, Google I/O, Google I/O 2024, large language models, machine learning, openai, Sundar Pichai]

Posted 5/14/24 8:40am

[Image: (credit: Getty)]

A study analyzing Apple, Microsoft, and SpaceX suggests that return-to-office (RTO) mandates can lead to a higher rate of employees, especially senior-level ones, leaving the company, often to work at competitors. The study (PDF), published this month by University of Chicago and University of Michigan researchers and reported by The Washington Post on Sunday, says:

"In this paper, we provide causal evidence that RTO mandates at three large tech companies—Microsoft, SpaceX, and Apple—had a negative effect on the tenure and seniority of their respective workforce. In particular, we find the strongest negative effects at the top of the respective distributions, implying a more pronounced exodus of relatively senior personnel."

The study looked at résumé data from People Data Labs and used "260 million résumés matched to company data." It only examined three companies, but the report's authors noted that Apple, Microsoft, and SpaceX represent 30 percent of the tech industry's revenue and over 2 percent of the technology industry's workforce. The three companies have also been influential in setting RTO standards beyond their own companies. Robert Ployhart, a professor of business administration and management at the University of South Carolina and scholar at the Academy of Management, told the Post that despite the study being limited to three companies, its conclusions are a broader reflection of the effects of RTO policies in the US.

[Category: Biz & IT, apple, microsoft, return-to-office, spacex, work from home]

Posted 5/14/24 8:03am

[Image: (credit: Getty Images | Andriy Onufriyenko)]

Billy Restey is a digital artist who runs a studio in Seattle. But after hours, he hunts for rare chunks of bitcoin. He does it for the thrill. “It’s like collecting Magic: The Gathering or Pokémon cards,” says Restey. “It’s that excitement of, like, what if I catch something rare?” In the same way a dollar is made up of 100 cents, one bitcoin is composed of 100 million satoshis—or sats, for short. But not all sats are made equal. Those produced in the year bitcoin was created are considered vintage, like a fine wine. Other coveted sats were part of transactions made by bitcoin’s inventor. Some correspond with a particular transaction milestone. These and various other properties make some sats more scarce than others—and therefore more valuable. The very rarest can sell for tens of millions of times their face value; in April, a single sat, normally worth $0.0006, sold for $2.1 million.
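The unit arithmetic above is easy to check: at 100 million sats per bitcoin, a $0.0006 face value per sat corresponds to a bitcoin price near $60,000. A tiny sketch (the $60,000 price is an illustrative assumption inferred from the article's figure, not a quoted market price):

```python
SATS_PER_BTC = 100_000_000  # one bitcoin = 100 million satoshis

def sat_face_value(btc_price_usd):
    """Face value in USD of a single satoshi at a given bitcoin price."""
    return btc_price_usd / SATS_PER_BTC

# The article's $0.0006-per-sat figure implies a bitcoin price of
# roughly $60,000; the rare sat's $2.1M sale is 3.5 billion times that.
print(sat_face_value(60_000))          # 0.0006
print(2_100_000 / sat_face_value(60_000))  # premium multiple
```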

[Category: Biz & IT, bitcoin, cryptocurrency, sats, syndication]

Posted 5/13/24 3:33pm

[Image: (credit: Getty Images)]

On Monday, OpenAI employee William Fedus confirmed on X that a mysterious chart-topping AI chatbot known as "gpt2-chatbot" that had been undergoing testing on LMSYS's Chatbot Arena and frustrating experts was, in fact, OpenAI's newly announced GPT-4o AI model. He also revealed that GPT-4o had topped the Chatbot Arena leaderboard, achieving the highest documented score ever. "GPT-4o is our new state-of-the-art frontier model. We’ve been testing a version on the LMSys arena as im-also-a-good-gpt2-chatbot," Fedus tweeted. Chatbot Arena is a website where visitors converse with two random AI language models side by side without knowing which model is which, then choose which model gives the best response. It's a perfect example of vibe-based AI benchmarking, as AI researcher Simon Willison calls it.
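Those head-to-head votes become a leaderboard through Elo-style rating updates, the scheme LMSYS has described using for Chatbot Arena. A minimal sketch, where the K-factor and starting ratings are illustrative assumptions rather than LMSYS's actual parameters:

```python
def elo_update(winner, loser, k=32):
    """Apply one Elo update after a visitor prefers the winner's response."""
    expected = 1 / (1 + 10 ** ((loser - winner) / 400))  # P(winner beats loser)
    delta = k * (1 - expected)   # upsets against stronger opponents pay more
    return winner + delta, loser - delta

a, b = 1000.0, 1000.0        # two anonymous models start even
a, b = elo_update(a, b)      # one vote for model A
print(round(a), round(b))    # A gains exactly what B loses
```

Aggregated over many thousands of anonymous votes, these ratings rank models by how often humans prefer their answers, which is what made gpt2-chatbot's chart-topping scores noticeable before OpenAI revealed its identity.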

[Category: AI, Biz & IT, AI vibes, Chatbot Arena, ChatGPT, GPT-4, GPT-4-turbo, GPT-4o, gpt2-chatbot, large language models, lmsys, machine learning, multimodal models, openai, Simon Willison]

Posted 5/13/24 1:55pm

[Image: (credit: Getty Images)]

Federal agencies, health care associations, and security researchers are warning that a ransomware group tracked under the name Black Basta is ravaging critical infrastructure sectors in attacks that have targeted more than 500 organizations in the past two years. One of the latest casualties of the native Russian-speaking group, according to CNN, is Ascension, a St. Louis-based health care system that includes 140 hospitals in 19 states. A network intrusion that struck the nonprofit last week took down many of its automated processes for handling patient care, including its systems for managing electronic health records and ordering tests, procedures, and medications. In the aftermath, Ascension has diverted ambulances from some of its hospitals and relied on manual processes.

“Severe operational disruptions”

In an advisory published Friday, the FBI and the Cybersecurity and Infrastructure Security Agency said Black Basta has victimized 12 of the country’s 16 critical infrastructure sectors in attacks that it has mounted on 500 organizations spanning the globe. The nonprofit health care association Health-ISAC issued its own advisory on the same day that warned that organizations it represents are especially desirable targets of the group.

[Category: Biz & IT, Security, black basta, CISA, critical infrastructure, ransomware]

Posted 5/13/24 11:58am

[Image: (credit: Getty Images)]

On Monday, OpenAI debuted GPT-4o (o for "omni"), a major new AI model that can ostensibly converse using speech in real time, reading emotional cues and responding to visual input. It operates faster than OpenAI's previous best model, GPT-4 Turbo, and will be free for ChatGPT users and available as a service through the API, rolling out over the next few weeks, OpenAI says. OpenAI revealed the new audio conversation and vision comprehension capabilities in a YouTube livestream titled "OpenAI Spring Update," presented by OpenAI CTO Mira Murati and employees Mark Chen and Barret Zoph, which included live demos of GPT-4o in action. OpenAI claims that GPT-4o responds to audio inputs in about 320 milliseconds on average, which is similar to human response times in conversation, according to a 2009 study, and much shorter than the typical 2–3 second lag experienced with previous models. With GPT-4o, OpenAI says it trained a brand-new AI model end-to-end using text, vision, and audio so that all inputs and outputs "are processed by the same neural network."

[Category: AI, Biz & IT, AI assistant, ChatGPT, films, GPT-4, GPT-4-turbo, GPT-4o, greg brockman, Her, large language models, machine learning, Mira Murati, movies, openai, sam altman, voice assistant]

Posted 5/10/24 11:02am

[Image: (credit: Getty Images)]

Google has updated its Chrome browser to patch a high-severity zero-day vulnerability that allows attackers to execute malicious code on end user devices. The fix marks the fifth time this year the company has updated the browser to protect users from an existing malicious exploit. The vulnerability, tracked as CVE-2024-4671, is a “use after free,” a class of bug that occurs in C-based programming languages. In these languages, developers must allocate the memory space needed to run certain applications or operations. They do this by using “pointers” that store the memory addresses where the required data will reside. Because this space is finite, memory locations should be deallocated once the application or operation no longer needs them. Use-after-free bugs occur when the app or process fails to clear the pointer after freeing the memory location. In some cases, the pointer to the freed memory is used again and points to a new memory location storing malicious shellcode planted by an attacker’s exploit, a condition that will result in the execution of this code.
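The danger described above comes from allocators reusing freed addresses. A toy Python model of that mechanic (this is purely illustrative; it is not Chrome's allocator, and real exploits work on raw C++ heap memory rather than a dictionary):

```python
class ToyHeap:
    """Toy allocator: freed addresses go on a free list and are reused first."""
    def __init__(self):
        self.mem = {}
        self.free_list = []
        self.next_addr = 0

    def alloc(self, value):
        if self.free_list:
            addr = self.free_list.pop()  # reuse a freed slot, like real allocators
        else:
            addr = self.next_addr
            self.next_addr += 1
        self.mem[addr] = value
        return addr

    def free(self, addr):
        self.free_list.append(addr)      # slot may now be handed out again

heap = ToyHeap()
p = heap.alloc("victim object")      # program allocates an object...
heap.free(p)                         # ...frees it, but keeps the stale pointer p
q = heap.alloc("attacker payload")   # attacker's allocation reuses the slot
print(q == p, heap.mem[p])           # stale p now reads attacker-controlled data
```

This is why the fix in C and C++ code is to null out pointers at free time: a cleared pointer fails fast instead of silently dereferencing whatever the attacker placed in the recycled slot.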

[Category: Biz & IT, Security, Uncategorized, chrome, google, vulnerability, zero-day]

Posted 5/9/24 3:20pm

[Image: (credit: Getty Images)]

On Monday, Stack Overflow and OpenAI announced a new API partnership that will integrate Stack Overflow's technical content with OpenAI's ChatGPT AI assistant. However, the deal has sparked controversy among Stack Overflow's user community, with many expressing anger and protest over the use of their contributed content to support and train AI models. "I hate this. I'm just going to delete/deface my answers one by one," wrote one user on sister site Stack Exchange. "I don't care if this is against your silly policies, because as this announcement shows, your policies can change at a whim without prior consultation of your stakeholders. You don't care about your users, I don't care about you." Stack Overflow is a popular question-and-answer site for software developers that allows users to ask and answer technical questions related to coding. The site has a large community of developers who contribute knowledge and expertise to help others solve programming problems. Over the past decade, Stack Overflow has become a heavily utilized resource for many developers seeking solutions to common coding challenges.

[Category: AI, Biz & IT, AI protest, API, ChatGPT, large language models, machine learning, moderation, openai, OverflowAI, OverflowAPI, sabotage, Stack Exchange, stack overflow]

Posted 5/9/24 12:40pm

[Image: (credit: Getty)]

For years, Dell customers have been on the receiving end of scam calls from people claiming to be part of the computer maker’s support team. The scammers call from a valid Dell phone number, know the customer's name and address, and use information that should be known only to Dell and the customer, including the service tag number, computer model, and serial number associated with a past purchase. Then the callers attempt to scam the customer into making a payment, installing questionable software, or taking some other potentially harmful action. Recently, according to numerous social media posts such as this one, Dell notified an unspecified number of customers that names, physical addresses, and hardware and order information associated with previous purchases were somehow connected to an “incident involving a Dell portal, which contains a database with limited types of customer information.” The vague wording, which Dell is declining to elaborate on, appears to confirm an April 29 post by Daily Dark Web reporting the offer to sell purported personal information of 49 million people who bought Dell gear from 2017 to 2024.

[Image: Ad posted to Breach Forums, as reported by Daily Dark Web. (credit: Daily Dark Web)]

The customer information affected is identical in both the Dell notification and the for-sale ad, which was posted to, and later removed from, Breach Forums, an online bazaar for people looking to buy or sell stolen data. The customer information stolen, according to both Dell and the ad, included:

[Category: Biz & IT, Security, breaches, dell, personally identifiable information, PII, scams]

Posted 5/8/24 3:35pm

[Image: (credit: Getty Images)]

Researchers on Wednesday reported critical vulnerabilities in a widely used networking appliance that leave some of the world’s biggest networks open to intrusion. The vulnerabilities reside in BIG-IP Next Central Manager, a component in the latest generation of the BIG-IP line of appliances organizations use to manage traffic going into and out of their networks. Seattle-based F5, which sells the product, says its gear is used in 48 of the top 50 corporations as tracked by Fortune. F5 describes the Next Central Manager as a “single, centralized point of control” for managing entire fleets of BIG-IP appliances. As devices performing load balancing, DDoS mitigation, and inspection and encryption of data entering and exiting large networks, BIG-IP gear sits at the network perimeter and acts as a major pipeline to some of the most security-critical resources housed inside. Those characteristics have made BIG-IP appliances ideal targets for hacking. In 2021 and 2022, hackers actively compromised BIG-IP appliances by exploiting vulnerabilities carrying severity ratings of 9.8 out of 10.

[Category: Biz & IT, Security, big-ip, f5, network intrusion, vulnerabilities]

As of 5/23/24 11:14am. Last new 5/23/24 11:14am.
