GlobalTech.TV — Episode 9: Monthly cloud and cybersecurity news (September 2024)

GlobalTech.TV — Episode 9: Monthly cloud and cybersecurity news (September 2024)
GlobalTechTV
GlobalTech.TV — Episode 9: Monthly cloud and cybersecurity news (September 2024)

Sep 28 2024 | 00:39:07

/
Episode 9 September 28, 2024 00:39:07

Hosted By

Ariel Munafo Eyal Estrin Raz Kotler

Show Notes

A podcast about cloud adoption and cybersecurity.

Website: https://www.globaltech.tv/

 

Social networks: https://linktr.ee/globaltechtv

 

View Full Transcript

Episode Transcript

[00:00:01] Speaker A: Hello, everyone, and welcome to another episode of Global Tech TV. And we are running to the news of this month and with whom we will start. [00:00:13] Speaker B: Whatever you want. I'm exciting for every episode that I'm here and I feel always welcoming to speak first. But it doesn't matter who. [00:00:24] Speaker C: Go ahead, speak. [00:00:28] Speaker A: Send me WhatsApp. You are crazy. Why always Raz? But go again, please. Raz. [00:00:33] Speaker B: Okay, so let's start. Do you know Fortinet Ariel? [00:00:37] Speaker A: Yes. It's a security company, right? Software security company. [00:00:41] Speaker B: It's one of the biggest security companies. They came out with their firewall to compete with checkpoint back in the days, but now they're still in the game. They have many customers, especially in Southeast Asia and Asia Pacific. And guess what, they are not the only one. They've been suffered from third party data breach that affect their customers. In this case, by the way, the breach was coming from their s three bucket. So the Faultynet confirmed that the data breach evolved 440gb of file stolen from starting from their Microsoft Azure. So there we have AWS over here and also Microsoft involved from the SharePoint server, the bridge was carried by drum bass. But another threat actor with another funny name which is full, and I'm sorry, I'm sure there is no kids here because their name is like a bit like slacky, which is 40 beach. Okay. And I think they use that name via beach as what? We think it's not the ocean beach, but they want to tackle Fortinet. They stole the data, moved to s three bucket, then where the AWS was involved as well. The infrastructure, I mean. Yeah, not the company. They use the structure and then download all the information. What they are attempting to do, what they attempt to do with fault in it. They try to run some of them, meaning they attempt to extort Fortinet. And guess what? Do you think that Fortinet agreed to pay? [00:02:42] Speaker A: I say no. [00:02:43] Speaker C: Depends if they know what kind of data. [00:02:45] Speaker A: I say no. [00:02:46] Speaker C: Something that they can somehow excuse with and the customer will be kind of okay with. They won't pay. [00:02:56] Speaker B: I think it's a good educated guess, yeah. Why? Because 14 had stated that lasted 0.3% of their customer base were affected. So I think their measurements based on how they need to respond and how they need to act upon this attack, they refuse. They refuse to pay. What we have based on bleeping computer and cyber daily in Australia, source disinformation that it's still evolving. But we know we don't have a clear information what's going on right now. Okay. Conclusion one, Microsoft Azure sharepoint servers cannot defend their self. You need to make sure they are protected enough from any attack, although they are in the cloud. Number two, when you use s three buckets and AWS, threat actors can use them as well. It's not just for the good guys. [00:04:09] Speaker C: It looks like it's a very old s three bucket because in the past, I'm guessing two years or something like this, AWS made the change to default settings. So anytime you open new s three bucket by default, it's been configured private. And if you want to allow it publicly public access, you need to do, you need to take some actions so it's not a default situation. But for all the buckets, you're right. They should have scanned all the assets and know which one are public and which one are private. It's not that hard to realize it from the UI, but again, I'm guessing they have like thousands of s three buckets. [00:04:50] Speaker B: Yeah, yeah. Okay, next. [00:04:54] Speaker A: By the way, Raz, I also want to add something. Fortinet are a multi cloud compute computing company. [00:05:03] Speaker B: It's a good way to finance. I like it. Thank you. Thank you for setting the stage for that. [00:05:12] Speaker A: And a cloud computing expert. [00:05:13] Speaker B: Right, so I know you want to speak about cloud, but you gave me the stage to speak about cyber. [00:05:18] Speaker A: Yeah, you are right. [00:05:20] Speaker B: Okay, thank you, Ayad and Ariel, of course. Okay, moving on. You know that I live in Southeast Asia and there are some cases that are coming from here. And now we're going a bit far in east, which is surprise, North Korea. Lazarus group, the notorious Lazarus group, they hit again and this time Capital one. What they target, they target developers with malicious python packages. Again, we speaking about supply chain attack in packages. They pose one of the campaigns that Capital one was trying to have. Ongoing campaign by installing VM Connect, which is VMConnect. For the one of you that are not familiar, it's the Microsoft Virtual or Hyper V virtual machine connection tool, Windows management instrument for the ones that are familiar with WMI, it's using for access desktop and console environment of running virtual machines. Meaning that Capital one want to have to leverage this capability that's coming from Microsoft. However, Lazarus group have a different plan. So they have a continuous targeting this development by distributing malicious software through an open source repository. And because of that, Capital one were able to actually understand that and pose the campaign just to get into some technical details how they did it. The developers were redirect to a GitHub repository, contain a homework task designed to trick to download the malware so they kind of use their maybe LMS, GitHub or open repositories that usually use for new developers. So this is the way how the attacker trick, try to trick the developer in order to download this malicious Python package. By the way, I don't know if, you know, if you go to 80% of new developers, you're going to say which language are you most comfortable with? You know what they're going to say? [00:07:54] Speaker A: No. [00:07:54] Speaker B: Junior developers. [00:08:00] Speaker C: Depending what they learned in. [00:08:01] Speaker B: College, JavaScript, BBScript, no HTML, they're gonna say Python. So, and this makes sense why Lazarus group was targeting Python. Yeah, they go into junior developers, they're putting a package pretending to be a benigna package, but it was containing a malware and they put it inside a virtual machine, which was very interesting to design. The malware attack. This tactic, by the way, was using in the past, not using Vimwork connect or packages, they use that in LinkedIn. In the past when you actually connect to LinkedIn and you try to do some learning and you download the package and you're getting the malware. Lazarus group, by the way, is one of the most notorious groups in the world. Yeah, but we can speak about it in a different episode. I can refer some of you to listen to the shamoon attack, okay. Against Aramco, one of the biggest companies in the world. Very famous attack by Lazarus group based on foreign journalists. Okay, next, some good news. Legal law enforcements were able to arrest a UK guys I guess are a bit young because they plan guilty to run an OTP agency. We all use MFA when we are getting recognized by our face recognition or we have our password. The next step of verification, the one time password that we are getting as an SMS. This group of individuals in the age of 19 to 21, after they've been caught, they pled guilty on the uk court running what they call the company OTP agency, an online platform that provided social engineering services to obtain one time passcodes. The target was they actually sold their service over online platform including for packages for Apple Pay and other mechanism and charge for being criminals. The way they help criminals, it's by providing already a loading credential, obtaining an OTP and make it automatic by scripted calls using text to speech. Can I say AI or it's too early in the call? [00:11:05] Speaker A: No, no, it's okay. No. [00:11:07] Speaker B: Okay, so AI leverage AI again. Thank you. Bleeping computer and NSA video on X. I love this one because bad guys be careful, sometimes the law enforcement can get into your door. So let's move to another one. And I said AI because I want to have a segue. There is lots of new startups that are facing and focusing how to prevent malicious activity on prompts, on chat prompts. So guess what? One individual researcher which is declared itself as entrepreneur in red team activity is individual. He found a spyware that exploits chat GPT on Mac. On Macintosh. Yeah, Mac, apple application of OpenAI. I don't know if you had the opportunity to do that and work with chat GPT. Ariel, lately, I know you do it once in a while, though, you said you're going to be improved on that. What happened? When you're writing something, you're going to see the prompt saying memory, meaning that he's now going to remember your prompt, your question, and the next time you're going to have the same topic, you're going to have related answers to the memory. So what, the malicious or the threat actor, did they actually leverage the memory by exploiting a chat GPT memory feature to plant a persistent spawn on a user's account? [00:12:55] Speaker A: Amazing. [00:12:56] Speaker C: How do you invoke it? [00:12:58] Speaker B: This is a very good opportunity to say that there is an option when using an app not like the browser. There is a capability to clean the memory. So once in a while. [00:13:13] Speaker C: No, no. But let's assume that the attacker was able to inject malicious code into my application into my context. How does he make sure that the next time I'm going to ask a question or send a prompt to the bot, he will return to me with the malicious code? [00:13:35] Speaker B: So one of the prompt injection capabilities was redirecting the chat GPT to output the relevant malicious website or documents. Because sometimes you're going to look at that as hallucinations of the chat. So you can ask, hey, when Ariel did his first exit. So the question is, the answer is going to be Ariel did his exit five years ago. This is the website where you can see the article about that, but instead of the real link to the article is going to redirect you to a malicious site and then going to drop the file to you. It's kind of a DNS cache poisoning attack method that we all familiar with. Yeah, something like that. Yeah. So it's kind of old tactic, but leveraging a new technology or new applications make sense. [00:14:45] Speaker A: Amazing. People are so smart and they are so, yeah, yeah. By the way, I didn't use too much DPT, but my kid is using. [00:14:54] Speaker B: So maybe I hope it doesn't use it on macOS with the application, because the issue was only affected on macOS chatgpt app due to the API restriction that not being enforced. [00:15:11] Speaker C: I can tell you that from my personal experience using chat GPT, you always need to question the answer it provides you, because sometimes you ask a question and the answer looks very good and then it says okay, provide me reference link to the original document or vendor documentation or something else, and he sends to a link and you click on the link and you get an error 404 message, meaning he generates random links. That looks perfect, but I mean it doesn't do it on purpose, but that's just because it doesn't have the answer to my question. But when you try to rely on this as a reliable source, we're not there yet. [00:15:51] Speaker B: Yeah, and for that reason I'm going to give my personal experience. You need to use multiple AI capability. Chat GPT is only one of them and one of my recommendation. If you want to get a more specific article based research base getting sources, try to use perplexity AI. I'm not doing any. We don't have any sponsorship by them. But I can say that by using perplexity AI in the past two months, the results, if I want to have some sources with a good reliability that based on article research, I'm going to perplexity AI. If I want to restructure or have a better grammar of English because I'm not native, if I want to have a referral to what I'm writing already, chat GPT, especially the new model which is preview for all preview. This is the one that's going to give you a very good, let's say co pilot experience. And again, I agree with you. Do not rely on chat GPT to do your research. It's your best assistance as a reference, not more than that. I agree based on my experience and I'm using it on hourly basis just to check again and again things. [00:17:22] Speaker A: Yeah, one thing, if they want to give us some sponsorship, we are open to it, right? [00:17:29] Speaker B: Open perplexity AI. We are open to that. I love your product. I love Softbank and I think this is one of the best investments they did so far in the past ten years. So kudos to Softbank and the perfect city AI guys. The one basically their target is to replace Google and be the search engine of the future. No money guarantee for this sponsorship. Thank you. Okay, moving forward, just wrapping the OpenAI, the researcher approach. OpenAI, they already patch it, so don't worry about it. You just need to update the app in case you are using the OpenAI chat GPT on macOS. Thank you hacker News and auth Tactica for being the source. Last one before we moving to the cloud news some technology released that I think it's going to be interested to all your all the people that are dealing with threat intelligence or threat intelligence services or want to get more data. Cloud player, one of the known companies out there, they launch a free, they launch free threat intelligence services to enhance cybersecurity organizations. But I think it's a great move and contribution back to the community. We all know that Telfer is listed in the Nasdaq. I'm not saying buy their stock, but I'm saying they are big enough to do good things for back to the community. So Cloudflare launches a free threat intelligence services that include threat intelligence help organization to detect and mitigate cyber threats. The service is going to offer something that we sometimes pay for it and I suggest you to try it just to see if it's relevant. Because they say they offer real time threat data and insight for malicious based on ips. Yeah, Cloudflare very good at CDN and a very good kind of your DNS firewall or other capabilities. So they say they know how to recognize a malicious ip addresses, domain URL's and integration of capability for security operation teams. Their main objective to doing that is to aim to improve cybersecurity business for all sizes. They want to be more approachable and I think it's a good move. Even though maybe in a few months from now it's not going to be free anymore. This is the usual suspect when the company is doing that. It's never free, but maybe it's going to stay free till then. Let's try together. Done with a cybersecurity for September. Back to you. [00:20:33] Speaker A: Thank you. Thank you. [00:20:35] Speaker C: Before we move, I have to say something. Even though most of the people in the industry look at Cloudflare, as you said, like a megacDN or DNS or DDoS protection or maybe even a WAF solution, they have large portfolio that people unfortunately are not aware enough. And recently I wrote a blog post comparing several platforms dealing with function as a service like serverless, like something that theoretically can compete with AWS Lambda or azure functions. So they have their own services. I don't know how do they integrate if you're working with one of the major hyperscale cloud providers, but they have their own platform because remember, they have a huge network, worldwide network, so they can take benefit of it. So very interesting solutions by Cloudflare. Need to advance your knowledge about this company more and more? [00:21:34] Speaker A: Yeah, Cloudflare also has their own s three. It's called two reals. I think. [00:21:39] Speaker C: Definitely they have their managed database. Yes, yes, they do. [00:21:44] Speaker B: Okay. They know that. Maybe I should buy more their stock. Okay, got it. Thank you. [00:21:51] Speaker C: We do not recommend any stock. [00:21:54] Speaker B: Disclaimer. No suggestion. Recommendation. I don't have any certificate for any investors on investment. [00:22:08] Speaker A: Yes, I think we are moving to the cloud computing news. [00:22:13] Speaker C: Yeah, so today we have, I believe most of the news are like previous months, meaning in the scala between AI and cybersecurity. So in between. So beginning with AWS, I'm beginning alphabetically. So AWS released a document called Building Security from the ground up with secure by design. We all heard about this concept of secure by design. So now aws have their own document talking about this document emphasize integrating security into product development early on. It aims to minimize vulnerability and recognize security as a core business requirement among kinker situations, automations, defense, in depth AI and threat modeling, and naturally, compliance. So interesting documents. If you're doing secure development, I suggest you read this document. Moving back to AI or Genai, AWS released a new blog post called Generative AI cost optimization strategies. I believe it was published in the past 48 hours or something. So this specific document deals with optimizing cost to the AI lifecycle. So far we've only focused on cost management as it relates to infrastructure in the build process or the ongoing management of cloud environment. So now there's a document specifically talks about how to manage cost in a genai environment, the entire lifecycle from the model selection, such as who is the user, who is the task, what input types will the model needs to handle and what output types are expected, and fine tuning to data management and operation. Like customizing our foundation models with organization, unique data and context, which is always interesting to know because it's not just generic content, because we're part of an organization, we have our own data sets. We want to train our data to be relevant for our business and for our customers. So interesting document as well. [00:24:31] Speaker B: Yeah, I have to say that I read this document and one of the things that I liked with the approach was the phenot for AI, meaning that AWS said not just use our resources, we're gonna assist you how to optimize your spend to do the best training. One of the things that caught my eyes when I look at it. So good job. [00:25:00] Speaker C: Okay, moving on. [00:25:04] Speaker A: I entered to mute. I don't know why, maybe you try to shut me out, but what is the name of this document? Because I was thinking to send it to my Philips guide. So. Thank you Raz. [00:25:16] Speaker B: Sure. [00:25:17] Speaker C: So we will share everything when we publish this recording, but it's called generative AI cost optimization strategies. [00:25:26] Speaker A: Okay, thank you. [00:25:27] Speaker B: Don't worry, we're not gonna charge you. Okay. [00:25:32] Speaker C: You got this. This one for free? [00:25:34] Speaker A: Yeah. [00:25:36] Speaker C: Okay, so moving on to Azure, still in the domain of cybersecurity and genai, Azure released two security features for their Azure AI platform. One of them is called prompt shields in Azure AI content safety and Azure OpenAI service. This specific feature includes two main capabilities. One of them is prompt shields for direct attacks. Previously called j break risk detection. This shield targets direct prompt injections, attacks where users deliberately exploit system vulnerabilities to elicit unauthorized behavior from the louse language model. So this is one capability. The other one is called prompt shields for indirect attacks. This shield aims to safeguard against attack that uses information not directly supplied by the user or developer, such as external document, like pushing us to use some external material. So this is one security capability or safeguard for Azure OpenAI. The other capability is called protected material detection in Azure AI content safety. In Azure OpenAI service, it's a feature designed to guard against outputs that could potentially violate copyright. So now they are not just focusing on malicious prompt that will get, I don't know, abusive content or something like this. Now we're also talking about copyright or legal copyright. So theoretically if you get hacked and then the lawyer says okay, what have you done to protect your legal documents? You can always say, but I checked all the checkboxes that Azure allowed me, so I don't know how foolproof is this, but it's another protection mechanism for this whole genai capabilities. [00:27:42] Speaker B: Yeah, I think that this is part of the biggest big project of Microsoft that called responsible AI. And I think this is one of their streamlines, which is great to see. [00:27:55] Speaker C: I agree. Okay, so new announcement. I believe we talked about this in the past, but now it's under general availability. Microsoft entra Internet access, Internet access, secure access to all Internet and SaaS applications and resources with an identity centric secure web gateway as we used to have the notorious proxy server, it unifying identity and network access controls through a single zero trust policy engine to close security gaps and minimize the risk of cyber threats. So fully secure solution. Still we need to play with this, but I now it's no longer under preview, it's now under general availability. Moving on to Google Cloud. So GCP released a new feature called Application Rationalization Dashboard. It helps you identify which application are best suited for modernization and migration to the cloud, which is always a debate among organizations from all sizes. I have my own on premise data center. I have my application. Now I need to figure out which of my workloads are suitable to run on the cloud and which one. Maybe I should keep it on premise, or maybe re architect in the cloud. So this new capability, using a dashboard, using a nice dashboard, analyze organizations code across tens or hundreds of applications, evaluates cloud maturity and identifies any cloud blockers, remediation efforts, software composition and health and open source risk. So you also get the insight of whether or not I'm using some open source component and will I have license issues. So very interesting idea for any organization who is in the process of migrating to the cloud. [00:29:57] Speaker B: Okay. [00:29:59] Speaker A: And I think that there are a lot of companies. [00:30:02] Speaker C: Yeah, yeah. And taking your existing workload, pushing it to the cloud, I'm guessing it's something that is doable. Unless you have specific, I don't know. Unless you don't have any license issues or specific hardware limitations, something like this. But the question you always ask yourself, will my workload as is, will it be able to run in the cloud? So it's an open question if you ask me lift and shift. I don't really like this approach. It was good for, I don't know, six, seven years ago. For today, you need tools like the dashboard that Google just released. But at least it will give you the insights of what will happen if theoretically tomorrow morning I want to move my workload to the cloud. Maybe, I don't know, there's a lot of blockers. Maybe I should think about re architect it, modernize it, or do something about it other than lift and shift. So very nice. [00:31:00] Speaker A: I think that we will should do if we'd like to join it is. It will be great. But really to discuss this cloud migration journey to the cloud, I think that I have a lot to say about many things. [00:31:17] Speaker C: Sure. Count me. [00:31:19] Speaker A: Okay. [00:31:20] Speaker C: Okay. Okay. So moving on. Since we haven't spoken enough about AI ML or genai. So still on the same topic, orca security, one of the cloud protection provider who provide insights into your cloud environment, they released their 2024 State of AI Security report. This specific report reveals the top AI risk seen in the wild, which shed lights on the security risk associated with deployed AI models in cloud services. And from the report we can see several insights or highlights. Over 56% of organizations have embedded AI to build custom applications, which is kind of a lot considering the fact that we haven't dealt with Genai before. 2023, 45% of Amazon Sagemaker buckets use non randomized default names and 98% haven't disabled root access for Sagemaker notebook instances. Amazon Sagemaker is a platform allowing you to connect to datasets and run live python code, mostly python code, and you can generate your own models and get insights, you can train them and then get the output and push it to production and be able to run the same model in full production. So this in high level what Amazon Sagemaker is doing. As we can see, people are still neglecting, neglecting the best practices, use random names, don't use root accounts. So there's a lot of recommendation for using tools such as Sagemaker. And lastly, approximately 62% of organizations have deployed an AI package with at least one CVE. So not only do we suffer from multiple vulnerabilities within containers, initially within the traditional vms, now we can see it. Also when we're generating our own Ji models, when we're creating the models, we're still using the open source libraries, which is completely fine. But as in any sort of development or secure sort of development lifecycle, we need to make sure that any package, any library, any third party code that we're using, we need to scan, we need to make sure that it's not using any well known vulnerabilities. [00:34:14] Speaker B: Yeah, I have to comment on something. First of all, I agree with all the protection mechanism and the procedure to just refer el. The only thing that we need to remember is that the company that made, and just a disclaimer. Yeah, that met the report, Oca security, they also have intentions that customers will use their product. Yeah. So are some things in the background over here. One thing, I reviewed a report, by the way, I love reports. And there is lots of reports right now all around, you know, the state of one of the. [00:34:56] Speaker C: Not sending you enough emails. I can see. [00:35:00] Speaker B: It'S a good one. It's enough. Yeah, it's okay. So one of the things that stood out from this report, unfortunately I have nothing against Oka. I like them. I have lots of friends over there. They didn't mention in the reports how many companies have been surveyed. So the percentage is out of how much? 5100 1000 companies. So I think the numbers are okay. But as me, as a reader, I would like to see a reference of how many companies have been referenced, from which regions, which type of people and et cetera. But yeah, just a comment. [00:35:50] Speaker C: Good point. Okay, so last news, we're taking some shift to the open source community. The Linux foundation announced OpenSearch Software foundation to foster open collaboration in search and analytics. The Open search project is a fork of elasticsearch, which changed its license in 2021 from open source license to a commercial one. Now they're claiming they reverted back and they have some other open source license, but we need to read the defined line before we actually return to use it. With this transition to the Linux Foundation, OpenSearch, which was previously hosted by AWS, will benefit from a vendor neutral environment, deep resources and collaborative support. And I already seen articles or blog posts from people who are using both Elasticsearch and Opensearchethere. The opinions vary between the people, but I'm hearing a lot of people in the industry going forward with the open search, not just because of the open license, but I'm guessing by the way they're using their new features or things that once they fold from elasticsearch they can make more adjustment, more stability, more resiliency of the product itself. Interesting move in terms of open search and the open source community. So these are all the cloud related news for September. [00:37:36] Speaker A: Okay, okay, okay. Thank you, thank you very much. And thank you very much. Raz. I just want to update everyone that I forgot. And I came today with the yellow shirt and the look of these guys. They were like killing me. It was in the episode of the Bible. They burned you with the eyes. That was how I felt. So sorry. [00:38:02] Speaker B: And both of us have gotten through glasses. We burn it through our glasses. [00:38:07] Speaker A: Yeah, so sorry about that guys. It will not happen again. I hope so. Hopefully maybe another great episode and many people will see. And you are welcome to write us, to send us information. By the way, we are working on creating more content. We have a few surprises, a podcast in Spanish maybe, and another great person, woman that will join us. But I said enough. [00:38:43] Speaker B: Stay tuned, stay tuned and say something. We might gonna start to do TikToks. Yeah. [00:38:50] Speaker A: Okay. Okay. So yeah, let's go for it. How we say so again, thank you very much, Raj, and for sharing all the content of the knowledge that you have. And until the next one, thank you, bye.

Other Episodes