Monday 30 August 2021

Innovate securely with Azure


Security is based on the inherent need for safety. Today, we see that need challenged more than ever. In the past year alone, we’ve witnessed an exponential increase in ransomware, supply chain attacks, phishing, and identity theft. These activities fundamentally threaten our human desire for security in situations such as ransomware attacks on hospitals or supply chain attacks on industrial environments.

This security challenge meets us on a variety of fronts. While attacks like the Nobelium campaign demonstrate the sophistication of organized nation-state actors, most attacks exploit far simpler vulnerabilities that are often publicly documented, with patches already available. Why is security so much more challenging today?

For defenders, the surface area to protect has never been larger. Security teams are often chronically understaffed. On top of that, they’re overwhelmed by the volume of security signals—much of which is noise—and often spending valuable resources on non-security work, like maintaining infrastructure.

Something needs to change. Security needs to be a foundational principle, permeating every phase of the development process, from the cloud platform itself to the DevOps lifecycle, to the security operations processes.

Azure: The only cloud platform built by a security vendor

Facing these challenges requires you to embed security into every layer of architecture. As the only major cloud platform built by a security vendor, Azure empowers you to do that. Microsoft has deep security expertise, serving 400,000 customers including 90 out of the Fortune 100, and achieving $10 billion in security revenue as a result. Microsoft’s security ecosystem includes products that are leaders in a total of five Gartner Magic Quadrants and seven Forrester Waves, plus:

◉ Microsoft employs more than 3,700 security experts and spends more than $1 billion on security every single year.

◉ The volume of security signals that Microsoft analyzes is staggering—more than 8 trillion signals every 24 hours.

◉ In 2020 alone Microsoft 365 Defender blocked 6 billion malware threats.

In a nutshell, we get security. It’s this extensive experience that informs our approach to Azure security—security that is built-in, modern, and holistic.

Security is not a destination, but a continuous journey. Well-funded nation-state attackers will always continue to innovate. That’s why it’s so important to choose a cloud vendor who is constantly monitoring for security threats, constantly raising the bar on the security of the platform, and constantly assessing best practices.

Built-in: Security integrated into the DevOps lifecycle

Protecting your cloud innovation requires security to be built into every stage of the lifecycle and every level of architecture. If it isn't, developers struggle to integrate security into the DevOps cycle, and security teams must either slow down innovation or leave assets unprotected. That's why Azure has security built in at every layer of architecture, not only at the runtime level but also at the point where code is written.

GitHub Advanced Security features, for example, empower developers to deliver more secure code with built-in capabilities like code scanning for vulnerabilities and secret scanning to keep secrets such as keys and passwords out of code repositories. Another key area of focus is dependency review, which lets developers update vulnerable open-source dependencies before merging code. This is important because 94 percent of projects use open-source code in some form (GitHub Octoverse 2020 report).

Easily discover and turn on security tools with controls that are built directly into the Azure platform. Controls built into resources like the virtual machine, SQL, storage, and container blades put security within easy reach of users beyond security professionals. Tools like Azure Defender help security operations (SecOps) work at scale and enable protection and monitoring for all cloud resources. Azure also offers broad policy support, automation, and actionable best practices.

Zero Trust principles enforce security at every level of the organization. Azure is built on the key Zero Trust principles: verify explicitly, use least-privilege access, and assume breach. Azure has a consistent Azure Resource Manager (ARM) layer to manage resources, which combines with our identity capabilities to deliver multi-factor authentication and least-privilege access. What's more, you get an architecture designed for Zero Trust with Azure's built-in networking capabilities, spanning micro-segmentation to firewalls.

Modern: Security fueled by AI and the scale of the cloud

By leveraging the power of AI and the scale of the cloud, defenders can protect, detect, and respond at a pace that enables them to get ahead of threats. It's here that Microsoft's wholesale commitment to security truly shines. Azure's security approach is also distinctly modern, especially compared with the tools many customers still use on-premises.

The AI used in Azure security solutions is powered by threat intelligence from across Microsoft’s entire security portfolio, encompassing trillions of signals per day and a large diversity of signals from the Microsoft Cloud. This allows Azure solutions to prioritize the most important incidents to raise to the security team, drastically cutting down on noise and saving SecOps precious time.

In addition, cloud-scale means that you always have the capacity you need, without investing in infrastructure setup and maintenance.

Holistic: Secure your entire organization, including Azure, hybrid, and multi-cloud

The attacks we’ve seen in recent years have proven that the age of effective point solutions is long over. Relying on a patchwork of disparate security solutions not only makes it harder for security teams to do their jobs—forcing them to pivot between many different tools—it also introduces far too many gaps for attackers to slip between.

That’s why it’s so important that security is holistic. Azure security solutions don’t just help you protect Azure—they protect your whole organization, including multi-cloud and hybrid environments. This gives you a unified view of your entire environment and enables SecOps to be more efficient with fewer tools.

For example, at the development phase, GitHub Advanced Security helps secure code deployed to any cloud. SecOps can get a bird's-eye view of your entire organization, including other clouds and your non-Microsoft security ecosystem, with Azure Sentinel, Microsoft's cloud-native SIEM. Or take it to the next level with integrated SIEM and XDR, using Azure Sentinel, Azure Defender, and Microsoft 365 Defender, for comprehensive coverage combined with a view of the most important incidents that need attention immediately.

Plus, manage your cloud security posture across Azure, AWS, Google Cloud, and on-premises within one user experience in Azure Defender.

Where do you start?

Azure's built-in, modern, and holistic solutions drastically simplify the process of securing your estate. But where do you start? Security is a shared responsibility. Whether you're a new customer or an existing user, here are five steps we advise you to take now:

1. Turn on Azure Secure Score. Azure Secure Score, located in Azure Security Center, gives a numeric view of your Azure security posture.

2. Turn on multi-factor authentication. Identity is such an important threat vector, and multi-factor authentication significantly reduces risk.

3. Turn on Azure Defender for all cloud workloads. Azure Defender protects against threats like remote desktop protocol (RDP) brute-force attacks, SQL injections, attacks on storage accounts, and much more. You can turn on Azure Defender with just a few clicks.

4. Turn on Azure WAF and DDoS protection for every website. This will ensure your web applications are protected from malicious attacks and common web vulnerabilities.

5. Turn on Azure Firewall for every subscription to protect Azure virtual networks.

Ongoing, it’s important that you assign a team member or partner to raise your Azure secure score percentage and engage your security operations team to action important incidents. This goes a long way towards improving your cloud security posture and lowering security risk.

Source: microsoft.com

Saturday 28 August 2021

Genomics testing on the ISS with HPE Spaceborne Computer-2 and Azure

This morning Microsoft News published a story about the use of Azure, enabled by HPE's Spaceborne Computer-2, on the International Space Station (ISS). The project was designed to overcome the limited bandwidth between the ISS and Earth by validating the benefits of a computational workflow that spans edge and cloud. Under this workflow, high-volume raw data is processed on the ISS using HPE Spaceborne Computer-2's edge computing platform, and a much smaller data set containing only the "interesting bits" is sent to Earth, where cloud resources perform compute-intensive analysis to determine what those interesting bits really mean.

The Azure Space team performed the software development needed for the entire experiment in just three days.

A brief background

The International Space Station (ISS), a microgravity and space environment research laboratory, has just marked 20 years of continuous human presence. New technology is delivered to it regularly to keep up with the research being performed. Computers used on the ISS have typically been custom-built, with specialized hardware and programming models required to deliver the reliability demanded in space. Unfortunately, the developer experience for targeting these custom spaceborne systems is complex, making programming slow and challenging compared to the commercial off-the-shelf systems most developers use today.

Installed in 2017, Spaceborne Computer-1, designed by HPE, validated that a modern, commercial-off-the-shelf computer could survive a launch into space, be installed by astronauts, and operate correctly on the ISS—without “flipping bits” due to increased radiation in space. Basically, it was a year-long test to see if the computer hardware used on Earth would function normally in space. Building on this success, HPE’s Spaceborne Computer-2, an edge computing platform with purposely designed features for harsh environments, was installed in April 2021 to deliver twice as much compute performance, and for the first time, artificial intelligence (AI) capabilities to advance space exploration and research by enabling the same programming models and developer experiences used on Earth.

In many ways, Spaceborne Computer-2, which comprises the HPE Edgeline EL4000 Converged Edge system and HPE ProLiant DL360 Gen10 server, is the ultimate edge computing platform, putting a game-changing amount of compute at the edge of space. However, the real limiting factor is the bandwidth between the ISS and Earth. Although Spaceborne Computer-2 supports the maximum available network speeds, NASA allocates it only two hours of communication bandwidth a week to transmit data to Earth, with a maximum download speed of 250 kilobytes per second.

In some cases, working around limited bandwidth can be accomplished by HPE helping researchers to compress data on Spaceborne Computer-2 before sending it down to Earth. In other cases, the data can be fully analyzed in space without needing to use the slow downlink at all. But what about research that requires more compute or bandwidth than what Spaceborne Computer-2 can provide, or that can be allotted to a single experiment among many? To address such scenarios, HPE applied its vision for an “edge to cloud” experience, in which Spaceborne Computer-2 is used to perform preliminary analysis or filtering on large data sets, extract what’s interesting or unexpected, and then burst those results down to Earth and into the public cloud for full analysis.

The Azure Space experiment

The Azure Space team at Microsoft proposed an experiment that simulates how NASA might monitor astronaut health in the presence of increased radiation exposure, as exists outside of our protective atmosphere. Such exposure will only increase as astronauts venture beyond the ISS’s low-earth orbit into and beyond the Van Allen Belts.

The experiment assumes access to a gene sequencer onboard the ISS, which is used to regularly monitor blood samples from astronauts. However, gene sequencing can generate an incredible amount of data, far too much for a 2 Mbps downlink, and the output needs to be compared against a large clinical database that's constantly being updated.

To overcome those limitations, the experiment uses HPE Spaceborne Computer-2 to perform the initial process of comparing extracted gene sequences against reference DNA segments and capture only the differences, or mutations, which are then downloaded to the HPE ground station.

On Earth, the data is uploaded to Azure, where the Microsoft Genomics service does the computational "alignment" work: matching the short base-pair gene sequence reads in the downloaded data (about 70 base pairs in length) against the full 3 giga-base-pair human genome, as required to determine where in the human genome each mutation is located and the type of change (deletion, addition, replication, or swap). Aligned reads are then checked against the National Institutes of Health's dbSNP database to determine what the health impacts of a given mutation might be. Watch the video below to see Azure in action.
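The alignment step can be illustrated with a toy example. The sketch below is a hypothetical illustration, not the Microsoft Genomics implementation (which uses BWA-style indexed alignment): it simply slides a short read along a reference string and reports the offset with the fewest mismatched bases.

```python
def align_read(read: str, reference: str) -> tuple[int, int]:
    """Naive sliding-window alignment: return (best_offset, mismatches).

    A toy stand-in for BWA-style alignment: every offset in the
    reference is tried, and the one with the fewest mismatched
    bases wins.
    """
    best_offset, best_mismatches = -1, len(read) + 1
    for offset in range(len(reference) - len(read) + 1):
        window = reference[offset:offset + len(read)]
        mismatches = sum(a != b for a, b in zip(read, window))
        if mismatches < best_mismatches:
            best_offset, best_mismatches = offset, mismatches
    return best_offset, best_mismatches

# Hypothetical data: a read copied from offset 8 of the reference,
# with one substituted base (A -> T).
reference = "ACGTACGTTAGCCGATAGGCTT"
read = "TAGCCGTTAGG"
offset, mismatches = align_read(read, reference)  # offset 8, 1 mismatch
```

A real aligner must also handle insertions and deletions and index the full 3 giga-base-pair genome, which is why this compute-heavy step runs in the cloud rather than on the ISS.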


Development process and computational workflow


The entire experiment was coded by 10 volunteers from the Azure Space team and its parent organization, the Azure Special Capabilities, Infrastructure, and Innovation Team. All major software components (both ISS-based and Azure-based) were written in Python and bash using Visual Studio Code, GitHub, and the Python libraries for Azure Functions and Azure Blob Storage. David Weinstein, Principal Software Engineering Manager at Azure Space, led the three-day development effort—consisting of a one-day hackathon and two days of cleanup.

The following graphic shows the computational workflow. It starts on the ISS, on Spaceborne Computer-2, which runs Red Hat Linux 7.4.


In space

◉ A Linux container hosts a Python workload, which is packaged with data representing mutated DNA fragments and wild-type (meaning normal or non-mutated) human DNA segments. There are 80 lines of Python code, with a 30-line bash script to execute the experiment.

◉ The Python workload generates a configurable amount of DNA sequences (mimicking gene sequencer reads, about 70 nucleotides long) from the mutated DNA fragment.

◉ The Python workload uses awk and grep to compare generated reads against the wild-type human genome segments.

◉ If a perfect match cannot be found for a read, it’s assumed to be a potential mutation and is compressed into an output folder on the Spaceborne Computer-2 network-attached storage device.

◉ After the Python workload completes, the compressed output folder is sent to the HPE ground station on Earth via rsync.
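The in-space steps above can be sketched in a few lines of Python. This is a simplified illustration under assumed data, not the actual flight code (which used awk and grep for the matching): reads are drawn from a fragment, and any read with no perfect match in the wild-type segments is kept as a potential mutation.

```python
import random

def generate_reads(fragment: str, read_length: int = 70, count: int = 5) -> list[str]:
    """Mimic gene-sequencer output: fixed-length substrings drawn at random."""
    max_start = len(fragment) - read_length
    return [fragment[s:s + read_length]
            for s in (random.randrange(max_start + 1) for _ in range(count))]

def filter_potential_mutations(reads: list[str], wild_type_segments: list[str]) -> list[str]:
    """Keep only reads with no perfect match in any wild-type segment.

    Mirrors the grep-based filter: an exact match means the read is
    normal and is discarded; everything else is downlinked.
    """
    return [r for r in reads
            if not any(r in segment for segment in wild_type_segments)]

# Reads drawn from the wild type itself always match perfectly ...
wild_type = "ACGT" * 30
clean = filter_potential_mutations(generate_reads(wild_type, 20, 3), [wild_type])

# ... while a read carrying a substitution is flagged for downlink.
mutated_read = "TAGCCGTT"  # one base differs from the segment below
kept = filter_potential_mutations(["ACGTACGT", mutated_read],
                                  ["ACGTACGTTAGCCGATAGGCTT"])
```

Only the flagged reads need to be compressed and sent to the ground station, which is what makes the 150x data reduction possible.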

On Earth

◉ The HPE ground station uploads the data it receives to Azure, writing it to Azure Blob Storage through azcopy.

◉ An event-driven, serverless function written in Python and hosted in Azure Functions monitors Blob Storage, retrieving newly received data and sending it to the Microsoft Genomics service via its REST API.

◉ The Microsoft Genomics service, hosted on Azure, invokes a gene sequencing pipeline to “align” each read and determine where, how well, and how unambiguously it matches the full reference human genome. (The Microsoft Genomics service is a cloud implementation of the open-source Burrows-Wheeler Aligner and Genome Analysis Toolkit, which Microsoft tuned for the cloud.)

◉ Aligned reads are written back to Blob Storage in Variant Call Format (VCF), a standard for describing variations from a reference genome.

◉ A second serverless function hosted in Azure Functions retrieves the VCF records, using the determined location of each mutation to query the dbSNP database hosted by the National Institutes of Health—as needed to determine the clinical significance of the mutation—and writes that information to a JSON file in Blob Storage.

◉ Power BI retrieves the data containing clinical significance of the mutated genes from Blob Storage and displays it in an easily explorable format.

The Aligner and Analyzer functions total about 220 lines of code, with the Azure services and SDKs handling all of the low-level “plumbing” for the experiment. The functions are automatically triggered by blob storage uploads and are configured to point to the right storage accounts—requiring just a small amount of code to parse the raw data and query Microsoft Genomics and the dbSNP database at runtime.
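The core of the dbSNP-lookup function can be sketched as a pure parsing step. This is a hypothetical reconstruction rather than Microsoft's actual code; the column layout follows the VCF standard (tab-separated CHROM, POS, ID, REF, ALT, ...), and the sample record below is illustrative.

```python
def parse_vcf_records(vcf_text: str) -> list[dict]:
    """Extract the fields a dbSNP lookup needs from VCF text.

    Header lines begin with '#'; data lines are tab-separated with
    CHROM, POS, ID, REF, and ALT as the first five fields.
    """
    records = []
    for line in vcf_text.splitlines():
        if not line or line.startswith("#"):
            continue
        chrom, pos, rsid, ref, alt = line.split("\t")[:5]
        records.append({
            "chrom": chrom,
            "pos": int(pos),      # genomic position of the variant
            "id": rsid,           # rsID, usable as a dbSNP query key
            "ref": ref,
            "alt": alt,
        })
    return records

sample = (
    "##fileformat=VCFv4.2\n"
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\n"
    "1\t11856378\trs1801133\tG\tA\t50\tPASS\t.\n"
)
variants = parse_vcf_records(sample)  # one record with id "rs1801133"
```

In the actual workflow, the resulting positions and rsIDs drive the queries against dbSNP, and the annotated results are written back to Blob Storage as JSON for Power BI to display.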

Develop and test

During development and test, developers didn’t have access to HPE Spaceborne Computer-2 or the HPE ground station, so they recreated those environments on Azure, relying on GitHub Codespaces to further increase their velocity. They packaged both the ISS and ground station environments into an Azure Resource Manager (ARM) template, which simulates the latency between the ISS and the ground station by deploying the Spaceborne Computer-2 environment to an Azure data center in Australia and the ground station environment to one in Virginia.

The results

On August 12, 2021, the 120MB payload containing the experiment developed by Azure Space was uplinked to the ISS and run on Spaceborne Computer-2. The experiment is configurable, so Azure Space was able to execute “test”, “small”, and “medium” scenarios, in that order.
Table 1 shows the results of the experiment in terms of processing times and data volumes:

                          Test        Small       Medium
Raw data examined         500 KB      6 MB        150 MB
Downloaded to Earth       4 KB        40 KB       900 KB
Run time on ISS           20 seconds  2 minutes   1 hour
Download time from ISS    <1 second   2 seconds   17 seconds

The experiment’s successful completion—and the data collected through it—is proof of how an edge-to-cloud computing workflow can be used to support high-value use cases aboard the ISS that might otherwise be impossible due to compute and bandwidth constraints. Without preprocessing the simulated output of the gene sequencer on the ISS to filter out only the gene mutations, 150 times as much data would need to be downloaded to Earth. Thus, a 200GB raw full human genome read, which would require over two years to download given bandwidth and downlink-window constraints, could be filtered to 1.5GB, which can be transmitted in just over an hour. Microsoft expects planned tests to further increase this ratio.
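These figures follow from the 250-kilobyte-per-second downlink and the two-hour weekly window cited earlier; a quick back-of-the-envelope check:

```python
DOWNLINK_BYTES_PER_S = 250 * 1000   # 250 KB/s maximum download speed
WEEKLY_WINDOW_S = 2 * 3600          # two hours of downlink per week

def downlink_weeks(size_bytes: float) -> float:
    """Weeks of allotted downlink windows needed to move size_bytes to Earth."""
    return size_bytes / DOWNLINK_BYTES_PER_S / WEEKLY_WINDOW_S

raw_genome = 200e9   # 200 GB raw full human genome read
filtered = 1.5e9     # after filtering to mutations only on the ISS

years_raw = downlink_weeks(raw_genome) / 52              # roughly 2.1 years
hours_filtered = filtered / DOWNLINK_BYTES_PER_S / 3600  # under two hours,
# so the filtered data fits within a single weekly downlink window
```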

Similarly, attempting to perform all of the processing that’s being done on Azure would require uploading a copy of the full reference human genome and a copy of the full dbSNP database. To complicate matters, the dbSNP database is constantly being updated and peer-reviewed by scientists across the globe, meaning that regular synchronization would be required to maintain a useful copy in space.

Build cloud applications productively, anywhere


From a software development perspective, the developer velocity with which Azure Space delivered the experiment is as impressive as its results—with all components delivered over a three-day period using serverless Azure Functions written in Python, and best-in-class developer tools such as Visual Studio Code and GitHub. To support the development of additional experiments by others, Weinstein’s team at Azure Space plans to publish the Resource Manager templates containing the simulated ISS and ground station environments they used for development and test.

Making such capabilities available to others is just one early step for Azure Space, a new vertical within Microsoft that was publicly announced about a year ago. Its mission is twofold: to enable organizations that build, launch, and operate spacecraft and satellites, and to “democratize the benefits of space” by enabling more opportunities for all actors, large and small, in much the same way that support for open source on Azure has helped democratize cloud computing. One such example is Azure Orbital, a ground station as a service that provides communication and control for satellite operators, including the ability to easily process satellite data at cloud scale.

Source: microsoft.com

Thursday 26 August 2021

Discover and assess ASP.NET apps at-scale with Azure Migrate


Azure Migrate is your central place in the Azure portal, providing a unified experience to discover, assess, and migrate your servers, databases, and web apps to Azure. With a range of options across IaaS, PaaS, CaaS, and serverless, Azure provides best-in-class flexibility and choice of platforms for your migrated workloads. With Azure Migrate, we are making it easier than ever to migrate your applications to the Azure platform that best suits your business requirements. Last week we announced multiple enhancements to the App Containerization tool to help customers looking to adopt containers for their migrated workloads. Today, we are announcing the preview of at-scale, agentless discovery and assessment of ASP.NET web apps to help you migrate ASP.NET web apps and run them natively on Azure App Service.

With this preview, you can now easily discover ASP.NET web apps running on Internet Information Services (IIS) servers in a VMware environment and assess them for migration to Azure App Service. Assessments will help you determine the migration readiness of the web apps, migration blockers and remediation guidance, recommended SKU, and cost of hosting your web apps in App Service.

Unified onboarding for servers, databases, and web apps

Azure Migrate appliance for VMware helps with discovery, assessment, software inventory, application dependency analysis, and migration.

◉ Deploy a new Azure Migrate on-premises appliance or upgrade your existing appliance to start discovering your ASP.NET web apps. You can also use the appliance to inventory installed software and perform agentless dependency analysis.

◉ You do not need to provide separate credentials for web apps discovery. Credentials provided for dependency analysis are sufficient for web apps discovery. Please note that the account used should have local admin privileges on the source server to use these features.

◉ You can provide multiple credentials (domain, non-domain, and SQL authentication) for discovery. Azure Migrate appliance will automatically map server and database credentials across the entire estate. Moreover, credentials are encrypted and stored on your appliance in your datacenter. Credentials are not sent to Microsoft.

Web apps discovery

◉ You can discover up to 20,000 web apps with one Azure Migrate appliance.

◉ Web apps discovery surfaces information such as web app name, web server type and version, framework, URL, binding port, and application pool.

◉ You can also use the agentless dependency analysis feature to identify application tiers or interdependent applications. This information is useful when you need to plan migration for interdependent servers.


Azure App Service assessment and cost planning


Azure Migrate assessments provide you with rightsizing and migration readiness recommendations. Once you have discovered your entire estate, you can start creating assessments for Azure IaaS, Azure VMware Solution, and Azure SQL. Now, you will also be able to create at-scale assessments of your web apps to determine their readiness for migration to Azure App Service and to get recommended SKU and cost details.

◉ You can customize assessments as per your requirements with the ability to customize assessment properties like target Azure region, application isolation requirements, and reserved instance pricing.

◉ You’ll get best-fit recommendations for the App Service SKU that the web app should be migrated to.

◉ In addition to migration readiness information, blockers and issues are surfaced so that you can mitigate them as needed.

◉ You can also modify assessment inputs at any time or create multiple assessments for the same set of web apps to compare and identify the target Azure options that work best for you.

Source: microsoft.com

Thursday 19 August 2021

Optimize your private mobile network and accelerate innovation with hyperscale cloud


The power of 5G: faster speeds, lower latencies, improved cost efficiencies, rich capabilities

The evolution of 5G has empowered enterprise organizations to do more. With high-speed, high-bandwidth connections and low-latency computing, enterprises can fuel entirely new forms of content and experiences. The convergence of 5G, multi-access edge computing (MEC), and the cloud makes it possible for operators, managed service providers, and enterprises to create private wireless networks that are ultra-fast, secure, and scalable, and that can take advantage of powerful cloud applications for analytics.

The future is poised to include innovative, far-reaching, and highly sophisticated services—the beginnings of which we can already see developing in smart cities and in many Industry 4.0 applications. Because of a broad range of factors—rising volumes of data, increased need for security and mobility, and demand for real-time processing, to name a few—private mobile networks are increasingly being viewed as a critical enabler in enterprise digital transformation. With the unique features of 5G already providing the foundation for sophisticated new services, Microsoft believes that private mobile networks represent a new way forward for emerging enterprise applications.

Recently, we’ve outlined how Azure private MEC enables operators and systems integrators to deliver private cellular, enterprise workloads, and edge compute services with ease, offering a robust partner network while combining computing, networking, and services on hardware located at the edge. In this article, we outline how bringing the power of hyperscale cloud to operator-enabled private mobile networks can lower capital expenditures (CapEx) and operating expenses (OpEx), accelerate the pace of innovation, and open up new and exciting revenue opportunities.

Partnering for private mobile networks

As enterprises look to take advantage of edge computing, they will need the combined expertise of operators for 5G technologies, and their partners for cloud and edge computing. Enterprises will benefit by partnering with operators and Managed Services Providers (MSPs) when deploying and managing 5G networks. Operators bring extensive experience in managing sophisticated mobile networks and have the tools and applications for end-to-end management. MSPs offer application diversity through their well-established partner ecosystem, allowing enterprises the deployment flexibility of managed services or turnkey implementation.

The proliferation of industry-specific use cases means there is no “one-size-fits-all” solution. A private mobile network applied to a remote oil-drilling platform, for example, will not meet the needs of an automotive manufacturer. To meet the diverse connectivity requirements, networks must be built with flexibility and simplicity in how they are configured, deployed, and managed.

An ideal solution

The key components of an effective private mobile network combine hyperscale cloud, private multi-access edge compute (MEC), LTE/5G mobile core, and end-to-end orchestration and management abilities into one easily deployable solution. When functioning in an optimal capacity, private mobile networks provide:

◉ Managed connectivity—operators can leverage the ecosystem of partners to address any connectivity needs of the enterprises, taking advantage of a substantial opportunity to deliver greater value.

◉ Managed services—these address the RAN, core, and edge components and package them within a centralized environment in the cloud, offering a single viewpoint.

◉ Self-management options—portals and dashboards can be implemented to offer service customization and visibility with ease, delivering true value to the enterprise in the form of service-level assurance.


What Azure offers: An integrated approach


Our approach to private mobile networks provides operators and managed service providers with a completely integrated solution, from the mobile core to the edge to the cloud. Our solution architecture delivers unique advantages to both operators and enterprises, such as integration with multiple radio access technologies and an advanced edge computing platform that can host both the mobile core and O-RAN components (such as vCU and vDU) from within our partner ecosystem.

The solution architecture features several key components: Azure Stack Edge, Azure Network Function Manager, and Affirmed mobile core (4G/5G) network functions.

First, Azure Stack Edge enables operators and systems integrators to easily deliver ultra-low-latency networking, applications, and services at the enterprise edge. It provides a single point for processing mobile network data, and integration with Affirmed 4G/5G mobile core technology enables local, intelligent breakout of data processing and seamless data sharing for faster processing and lower bandwidth consumption. As a fully managed compute and connectivity solution, customers stay focused on developing new scenarios and revenue options.

Next, scalability is an essential requirement of any private network deployment. Microsoft’s approach to provisioning, deployment orchestration, and management provides the speed, agility, and automation required to deploy and manage private mobile networks at scale, automating the lifecycle management of private network services in the process.

Azure Network Function Manager (NFM), a cloud-native orchestration service, allows customers to deploy and provision network functions on Azure Stack Edge Pro for a consistent hybrid experience using the Azure portal. The consistency comes from using a familiar platform like Azure Marketplace to pick from a curated list of pre-validated offers (validated to ensure proper operation at the edge), and then the Azure portal to deploy the network functions as managed applications.

Lastly, the Affirmed 5G Mobile Core is a fully virtualized, cloud-native solution that includes all standard 5G core network functions, with integrated virtualized network probes plus enhanced functionality such as Wi-Fi interworking and service automation.

The benefits of Microsoft’s private mobile network approach are clear


Microsoft’s fully integrated, cloud-native approach to private mobile networks has clear benefits to operators and MSPs. These range from providing a proven 5G mobile core architecture as the foundation for a carrier-grade network experience, to offering a cloud-managed solution that’s designed to meet their unique needs. Simplified administration, security, and operation come via automated lifecycle management, and service assurance that meets five-nines availability to support mission-critical applications. Operators and MSPs can now solve the critical infrastructure challenges involved with managing and deploying a private mobile network and monetize the enterprise opportunity.

Source: microsoft.com

Tuesday 17 August 2021

Azure Government Top Secret now generally available for US national security missions

Today we’re announcing the general availability of Azure Government Top Secret, a significant milestone in our commitment to bringing unmatched commercial innovation to our government customers across all data classifications. This announcement, together with new services and functionality in Azure Government Secret, provides further evidence of Microsoft’s relentless commitment to the mission of national security, enabling customers and partners to realize the vision of a multi-cloud strategy and achieve greater agility, interoperability, cost savings, and speed to innovation.

We've worked in close collaboration with the US Government to build a cloud portfolio that serves the national security mission and empowers leaders across the Intelligence Community (IC), Department of Defense (DoD), and Federal Civilian agencies to innovate securely wherever the mission requires and at all data classifications, with a continuum of technology from on-premises to cloud to the tactical edge.


Launching with more than 60 initial services and more coming soon, we’ve achieved the Authorization to Operate (ATO) of Azure Government Top Secret infrastructure in accordance with Intelligence Community Directive (ICD) 503 and facilities accredited to meet the ICD 705 standards. These new air-gapped regions of Azure will accelerate the delivery of national security workloads classified at the US Top Secret level. In addition, we now have 73 services in Azure Government Secret, and we continue to bring new services into the boundary aligned to mission priorities.

Whether in sea, land, air, space, or cyberspace, today’s mission leaders face a common set of challenges—how to make sense of an unprecedented influx of data from many disparate sources, how to modernize existing infrastructure to enable agility today and tomorrow, and how to protect data, assets, and people across a rapidly evolving global threat landscape.

Joining the 95 percent of Fortune 500 companies that use Azure, mission leaders choose Azure to bring market-leading commercial innovation to government, enabling faster insights from data, greater agility and interoperability to meet the demands of the mission, and unified cybersecurity capabilities to protect the nation’s most critical data. Some examples in each of these areas are below.

Build a unified data strategy for mission—anywhere


Across data classifications, mission owners are working to make sense of massive volumes of data—reshaping how information is gathered, stored, processed, and shared to unlock the power of data to inform decision making. Azure’s data capabilities are limitless from ground to cloud, enabling customers to synthesize data no matter where that data is located and no matter where insights are needed, including data at the far edge, in disconnected or intermittent scenarios, or data fed in by satellite or submarine.

The new Azure regions for highly classified data expand the ability of our national security customers to harness data at speed and scale for operational advantage and increased efficiency, with solutions such as Azure Data Lake, Azure Cosmos DB, Azure HDInsight, Azure Cognitive Services, and more. Built into a unified data strategy, these services help human analysts more rapidly extract intelligence, identify trends and anomalies, broaden perspectives, and find new insights. With common data models and an open, interoperable platform that supports entirely new scenarios for data fusion, mission teams use Azure to derive deeper insights more rapidly, empowering tactical units with the information needed to stay ahead of adversaries.


Azure teams are also working shoulder-to-shoulder with our customers and partners to deliver purpose-built solutions on the Azure platform, including solutions to improve data insights. For example, to enable data fusion across a diverse range of data sources, we’ve built a solution accelerator called Multi-INT enabled discovery (MINTED) that leverages raw data and metadata as provided and enriches the data with machine learning techniques. These techniques are either pre-trained or unsupervised, providing a no-touch output as a catalyst for any analytic workflow. This becomes useful for many initial triage scenarios, such as forensics, where an analyst is given an enormous amount of data and few clues as to what’s important.

In addition, we’re continuing to deliver innovation to enable a diverse partner ecosystem from silicon to space, for example, enabling DoD to strengthen the US microelectronics supply chain through our work with the commercial and defense industrial base on the Rapid Assured Microelectronics Prototypes (RAMP) program. Today’s announcement also expands the options available for government agencies and partners to utilize Azure Space solutions from their native network environments to unlock new capabilities across any data classification. For example, with increased connectivity and ready access to data, mission owners can harness large-scale geospatial and space data for new scenarios such as machine learning, synthetics, visualization, emulation of space missions, and more.

Modernize mission systems for speed to innovation


Getting new mission capabilities into the hands of analysts and warfighters requires organizations to modernize existing systems and architect an interoperable enterprise. Azure provides a secure foundation for this innovation, with an open platform that enables developers to build with their choice of languages, tools, platforms, and frameworks, industry-leading tools for true cloud-native application development, and modern DevSecOps capabilities that can accelerate the path to Authority to Operate (ATO).

New services in Azure Government Top Secret such as Azure Kubernetes Service (AKS), Azure Functions, and Azure App Service enable mission owners working with highly sensitive data to deliver modern innovation such as containerized applications, serverless workloads with automated and flexible scaling, and web apps supported by built-in infrastructure maintenance and security patching.

“In our work supporting mission-critical customers who want to innovate faster, Azure provides several key advantages. It’s flexible, truly hybrid, and has the openness and extensibility that allows for developer choice. This enables customers to benefit from the latest commercial innovations to derive insights from their most sensitive data.”—Larry Katzman, Chief Executive Officer & President, Applied Information Sciences

With multiple geographically separate regions, Azure Government Top Secret provides customers with multiple options for data residency, continuity of operations, and resilience in support of national security workloads. Natively connected to classified networks, Azure Government Top Secret also offers private, high-bandwidth connectivity with Azure ExpressRoute. These new regions deliver a familiar experience and alignment with existing programs, enabling mission teams to build low and deploy high with consistency across governance, identity, development, and security.

Protect the nation’s data with connected cybersecurity


Protecting our nation from rapidly evolving threats is a critical priority. Microsoft brings together massive signal depth and diversity of over 8 trillion signals per day combined with cutting-edge AI, machine learning, and a global team of security experts to deliver unparalleled protection.

For example, we understand the complex nature of nation-state cyberthreats and mobilize all our security analyses and products to discover, track, and defend our customers against them. Our approach rests on a thorough understanding of the tactics and techniques these groups use, their targeting patterns, and the possible objectives driving their activity. These insights, along with the fidelity Microsoft signals provide, allow us to better spot emerging malicious campaigns, warn customers about the activity, and implement protections against them.

To develop a unified cybersecurity approach to protect the nation’s data, mission owners can utilize products informed by this threat intelligence, including Azure Security Center and Azure Sentinel to integrate multiple security point solutions and continually assess, visualize, and protect the security state of resources in Azure, on-premises, and in other clouds. Both are now available in Azure Government across all data classifications.

Azure Security Center scans your hybrid environment continuously, providing recommendations to help you harden your attack surface against threats. Azure Sentinel enables you to collect data at cloud scale—across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds; detect previously uncovered threats and minimize false positives; investigate threats and hunt suspicious activities at scale; and respond to incidents rapidly with built-in orchestration and automation of common tasks.

Source: microsoft.com

Saturday 14 August 2021

How to re-size Azure virtual networks that are peered—now in preview

As customers adopt and scale their applications in Azure, they constantly need to grow or resize their networks in the cloud. Virtual networks in Azure have had a long-standing constraint where any address space change is only allowed if the virtual network does not have any peerings. Today, we are announcing that this limitation has been lifted, and customers can freely resize their virtual networks without incurring any downtime. With this feature, existing peerings on the virtual network do not need to be deleted prior to adding or deleting an address prefix on the virtual network.

Read More: AZ-303: Microsoft Azure Architect Technologies

Details on adding or removing address space on peered virtual networks

You can update (add or remove) address space on a virtual network that is peered with another virtual network in the same region or across regions. Address space updates also work if the virtual network is peered with another virtual network across subscriptions. Note: virtual networks that have peerings across Azure AD tenants are currently not supported. This feature introduces two new properties on the virtualNetworkPeerings object of the virtual network:

◉ remoteVirtualNetworkAddressSpace: Contains the most current address space of the peered virtual network. This address may or may not be the same as the peered address contained in the remoteAddressSpace property.

◉ peeringSyncLevel: Indicates if the address contained in the remoteVirtualNetworkAddressSpace property is the same as the address that is actually peered with the virtual network.
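As an illustration (resource names and address prefixes here are hypothetical), the two properties might appear as follows on a peering link that points at a virtual network whose address space has just been updated but not yet synced:

```json
{
  "name": "peering-to-vnet1",
  "properties": {
    "peeringState": "Connected",
    "peeringSyncLevel": "LocalNotInSync",
    "remoteAddressSpace": {
      "addressPrefixes": [ "10.1.0.0/16" ]
    },
    "remoteVirtualNetworkAddressSpace": {
      "addressPrefixes": [ "10.1.0.0/16", "10.2.0.0/16" ]
    }
  }
}
```

In this sketch, remoteAddressSpace still holds only the prefix that is actually peered, while remoteVirtualNetworkAddressSpace already reflects the newly added 10.2.0.0/16 prefix; performing a sync brings the two back into agreement.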

When address space on a virtual network is updated, the corresponding peering links on the remote virtual networks towards this virtual network need to be synced with the new address space. The status of the peering links between the two virtual networks indicates which side of the peering link needs to be synced with the new address space.

◉ LocalNotInSync: When you update the address space on the first virtual network, the peering status of the link from the second virtual network to the first virtual network is LocalNotInSync. At this stage, while the peering is active across the old address space of the virtual network, the new address space has not peered with the remote virtual network.

◉ RemoteNotInSync: When you update address space on the first virtual network, the peering status of the link from the first virtual network to the second virtual network is RemoteNotInSync. A sync operation on the peering link from the second virtual network to the first virtual network will synchronize the address space across the peering.

◉ Note: Address changes on Azure Resource Manager (ARM) virtual networks that have peerings to classic (ASM) virtual networks are enabled; however, the ASM virtual network will not be updated with the new address space of the ARM virtual network.

The feature is currently in preview and supported across all production Azure regions.

Get started today

Updating the address space on a virtual network that is peered can be accomplished in two easy steps. It is supported through REST APIs as well as Portal and PowerShell clients.

1. Add a new address on a virtual network that has active peering connections with other virtual networks.

◉ To do this in PowerShell, refer to the AzVirtualNetwork documentation.

◉ To do this through the Azure portal, refer to the address range documentation.

2. Perform a “sync” on the peering link from each of the peered remote virtual networks to this virtual network on which the address change is made. This action is required for each remote peered VNet to learn of the newly added address prefix.

To do this on the Azure portal, go to the peerings tab on the virtual network where the address update has been made. Select all the peerings that have peering status as “Remote sync required”, and then click the Sync button. This will ensure that all the remote peered virtual networks learn the updated address space of this virtual network.


The sync can also be performed individually on the peering link from each remote peered virtual network by going to the peerings tab on the remote virtual networks.


To do this in PowerShell, use the Sync-AzVirtualNetworkPeering cmdlet on each peering link from the remote virtual network to the virtual network on which the address change was made.
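Assuming the address update is made on a virtual network named VNet1 that is peered from VNet2 (all resource names here are hypothetical), the two-step flow might look like this in PowerShell; it requires the Az module and an authenticated Azure session:

```powershell
# One-time prerequisite while the feature is in preview:
Register-AzProviderFeature -ProviderNamespace Microsoft.Network `
    -FeatureName AllowUpdateAddressSpaceInPeeredVnets

# Step 1: add a new address prefix to the peered virtual network.
$vnet = Get-AzVirtualNetwork -Name VNet1 -ResourceGroupName MyRG
$vnet.AddressSpace.AddressPrefixes.Add("10.2.0.0/16")
Set-AzVirtualNetwork -VirtualNetwork $vnet

# Step 2: sync the peering link from each remote peered VNet (here VNet2) to VNet1,
# so the remote side learns the newly added prefix.
Sync-AzVirtualNetworkPeering -Name PeeringToVNet1 `
    -VirtualNetworkName VNet2 -ResourceGroupName MyRG
```

Step 2 must be repeated for every remote virtual network that is peered with VNet1.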

While the feature is in preview, customers need to register their subscriptions with the following feature flag: Microsoft.Network/AllowUpdateAddressSpaceInPeeredVnets to use this capability. This flag will be deprecated when the feature is generally available.

Check out the video to learn how to add a new address prefix on an Azure virtual network that is peered to another virtual network, using the Azure portal.

Source: microsoft.com

Friday 13 August 2021

How You Benefit By Completing the Microsoft MS-700 Exam with Practice Test

Organizations are composed of people who work in teams, which is why they seek professionals with proficiency in team management. Individuals interested in the Teams Administrator role can prove their knowledge and skills by earning the Microsoft 365 Certified: Teams Administrator Associate certification, achieved by passing the MS-700 exam.

Earning this certification qualifies candidates for this associate-level role and confirms their expertise in configuring, deploying, and managing Office 365 workloads for Microsoft Teams, with a focus on efficient collaboration and communication within the enterprise environment. Certified professionals hold a range of skills, including working with PowerShell, SharePoint, OneDrive, Azure Active Directory, Exchange, and Office 365 Groups.

Overview of Microsoft MS-700 Exam

The exam is available only in English. It evaluates an applicant's expertise in managing Teams and app policies, planning and configuring Microsoft Teams environments, and managing meetings, chat, and calling. Keep in mind that you must acquire a solid grasp of these knowledge areas before attempting the exam to achieve a high score. The MS-700 exam costs $165, and registration is done through the Pearson VUE platform.

Applicants can expect to answer 40-60 questions of various formats, such as multiple-choice, active screen, drag and drop, and performance-based. The exam duration is 120 minutes. To pass the Managing Microsoft Teams MS-700 exam, applicants must score at least 700 out of 1000 points. There is no negative marking, so make sure you attempt every question to maximize your score.

How Long Does It Take to Prepare for the Microsoft MS-700 Exam?

Microsoft MS-700 is a mid-level certification exam meant for administrators. If you are currently working as an administrator, are well-versed in the basic concepts of Microsoft Teams, and administer Microsoft Teams in your job role, you should allow around two to three weeks to prepare for this certification exam.

For the most part, you have to work on your exam-taking skills, which you can do by learning all the MS-700 exam topics thoroughly and taking practice tests.

For the first three days, concentrate on becoming familiar with the exam topics. For the next two weeks, learn from study guides and training courses. After that, take practice tests from a reliable, trusted platform, and make sure you can finish them within the allotted time.

For individuals with little to no experience in the exam domains and skill areas, it may take approximately six weeks or more to become familiar with all the concepts and master the Microsoft MS-700 exam topics.

How Hard Is the Microsoft MS-700 Exam?

Microsoft certification exams are generally not easy. You are expected to prove high-level competence, and you have to make an effort to acquire such skills. As more services and features, such as managing calls and meetings, are added to the certification exams, the difficulty level keeps rising. However, applicants who commit time to preparing for their exams often achieve success on the first try.

How Will the Microsoft 365 Certified: Teams Administrator Associate Certification Benefit IT Professionals?

Here is what you get by passing the Managing Microsoft Teams MS-700 exam:

  • You will acquire essential knowledge and skills in the exam's subject areas, which include planning and configuring a Microsoft Teams environment; managing calling, chat, and meetings; and managing Teams and app policies.
  • You will perform more efficiently because you will possess relevant skills and hands-on exposure.
  • Passing the MS-700 exam and becoming a Microsoft 365 Certified: Teams Administrator Associate implies job security. Hiring managers value the certification, which helps keep you employed even when your organization is downsizing.
  • It will prove your level of dedication and help you stand out from the crowd. Your Microsoft Teams skills can reward you with excellent job opportunities.
  • Passing the Microsoft MS-700 exam also promotes success in the business or enterprise you work for; your Microsoft Teams administration capabilities support your employer by ensuring success in their daily business operations.

Conclusion

It is worth pointing out that you will find many study materials for MS-700 exam preparation, and with dedicated study, you can pass your certification exam with a good score. The benefits of the Microsoft 365 Certified: Teams Administrator Associate certification are numerous, so pass this exam and enjoy the rewards that come with it.

Thursday 12 August 2021

Accelerate application modernization with Azure Migrate: App Containerization

As organizations embark on cloud-centric business transformation to become more agile and operationally efficient, application modernization is fast becoming a cornerstone to realize these goals. In this blog, we will see how the Azure Migrate: App Containerization tool helps you accelerate application modernization as you migrate applications to Azure.

With Azure, you have the flexibility to choose from a variety of hosting options for your applications spanning the spectrum of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), containers-as-a-service (CaaS), and serverless. Azure offerings such as Azure App Service and Azure Kubernetes Service (AKS) are helping application developers focus on solving the most important business problems. With built-in infrastructure maintenance, security patching, and scaling, Azure App Service is a purpose-built platform for running web-based applications. Azure App Service has native support for the most popular application frameworks and also lets you run containerized applications using custom Windows or Linux containers. AKS is a great choice for running containerized applications when you want more control over your infrastructure.

Containers have become extremely popular for building and managing modern applications. Deploying applications as containers allows application owners and IT decision-makers to realize consistent, reliable, and repeatable deployments across environments. Benefits also include improved infrastructure utilization and standardization of management practices through better separation of concerns. Customers are increasingly using containers as units of application deployment not only for distributed applications designed to use microservices architecture but also for other applications that they re-platform without significant alterations. Re-platforming is often the first step before embarking on further modernization of the application. Customers increasingly prefer to re-platform to containers in many cases where fully rearchitecting or refactoring an application may not be necessary or prudent.

At Microsoft Ignite earlier this year, we announced the preview of the Azure Migrate: App Containerization tool to help you easily containerize and migrate apps to AKS. Based on the feedback we’ve received, today we are announcing multiple enhancements to the preview.

Repackage applications with Azure Migrate: App Containerization

The App Containerization tool offers a point-and-containerize approach to repackage applications as containers with minimal to no code changes by using the running state of the application. The tool currently supports containerizing ASP.NET applications and Java web applications running on Apache Tomcat.

Figure 1: Modernization capabilities of Azure Migrate: App Containerization

The tool lets you:

◉ Remotely connect to the application server to discover applications and their configurations using the standalone App Containerization utility.

◉ Parameterize application configurations and dependencies such as database connection strings to facilitate consistent and repeatable deployments.

◉ Externalize static content and application state stored on the file system by moving them to persistent storage.

◉ Build and publish container images using Azure Container Registry tasks.

◉ Customize and reuse tool-generated artifacts such as Dockerfiles, container images, and Kubernetes resource definition files for day two operations and integrate them directly into your continuous integration and continuous delivery (CI/CD) pipeline.

Figure 2: Downloadable and reusable artifacts for Day 2 operations

New enhancements for application deployment and management


With the latest release of the tool, you can now:

◉ Deploy containerized applications to Azure App Service containers or AKS.

◉ Use Azure Key Vault to manage application secrets such as certificates and parameterized configurations. You can also choose to use Kubernetes secrets or Azure App Service application settings to manage your secrets.

◉ Automatically integrate application monitoring for Java apps with Azure Application Insights.

Figure 3: Automatically enable application monitoring with Azure Application Insights

Source: microsoft.com

Tuesday 10 August 2021

Optimize your Azure workloads with Azure Advisor Score


Modern engineering practices, like Agile and DevOps, are redirecting the ownership of security, operations, and cost management from centralized teams to workload owners—catalyzing innovations at a higher velocity than in traditional data centers. In this new world, workload owners are expected to build, deploy, and manage cloud workloads that are secure, reliable, performant, and cost-effective. If you’re a workload owner, you want well-architected deployments, so you might be wondering, how well are you doing today? Of all the actions you can take, which ones will make the biggest difference for your Azure workloads? And how will you know if you’re making progress?

That’s why we created Azure Advisor Score—to help you understand how well your Azure workloads are following best practices, assess how much you stand to gain by remediating issues, and prioritize the most impactful recommendations you can take to optimize your deployments.

Introducing Advisor Score

Advisor Score enables you to get the most out of your Azure investment using a centralized dashboard to monitor and work towards optimizing the cost, security, reliability, operational excellence, and performance of your Azure resources.

Advisor Score will help you:

◉ Assess how well you’re following the best practices defined by Azure Advisor and the Microsoft Azure Well-Architected Framework.

◉ Optimize your deployments by taking the most impactful actions first.

◉ Report on your well-architected progress over time.


Baselining is one great use case we’ve already seen with customers. You can use Advisor Score to baseline yourself and track your progress over time toward your goals by reviewing your score’s daily, weekly, or monthly trends. Then, to reach your goals, you can take action first on the individual recommendations and resources with the most impact.

How Advisor Score works


Advisor Score measures how well you’re adopting Azure best practices, comparing and quantifying the impact of the Advisor recommendations you’re already following, and the ones you haven’t implemented yet. Think of it as a gap analysis for your deployed Azure workloads.

The overall score is calculated on a scale from 0 percent to 100 percent both in aggregate and separately for cost, security (coming soon), reliability, operational excellence, and performance. A score of 100 percent means all your resources follow all the best practices recommended in Advisor. On the other end of the spectrum, a score of zero percent means that none of your resources follow the recommended best practices. Advisor Score weighs all resources, both those with and without active recommendations, by their individual cost relative to your total spend. This builds on the assumption that the resources which consume a greater share of your total investment in Azure are more critical to your workloads. Advisor Score also adds weight to resources with longstanding recommendations. The idea is that the accumulated impact of these recommendations grows the longer they go unaddressed.
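As a rough illustration of this cost weighting (a simplified sketch, not Microsoft's published formula, with made-up resource names and costs), the score can be thought of as the share of total spend attached to resources that already follow best practices:

```powershell
# Hypothetical inventory: each resource's monthly cost and whether it
# currently follows all Advisor best practices (no open recommendations).
$resources = @(
    [pscustomobject]@{ Name = 'vm1';  MonthlyCost = 500; Healthy = $true  },
    [pscustomobject]@{ Name = 'db1';  MonthlyCost = 300; Healthy = $false },
    [pscustomobject]@{ Name = 'app1'; MonthlyCost = 200; Healthy = $true  }
)

# Weight each resource by its cost relative to total spend.
$totalCost   = ($resources | Measure-Object MonthlyCost -Sum).Sum
$healthyCost = ($resources | Where-Object Healthy |
                Measure-Object MonthlyCost -Sum).Sum
$score = [math]::Round(100 * $healthyCost / $totalCost, 1)

"Cost-weighted score: $score%"   # 700 of 1000 in spend is healthy, so 70%
```

Under this cost weighting, fixing the recommendation on db1 (30 percent of spend) would raise the score far more than fixing an equally non-compliant resource that costs a fraction as much.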

Source: microsoft.com

Sunday 8 August 2021

Introducing seamless integration between Microsoft Azure and Confluent Cloud

Real-time streaming is a business imperative for digital transformation

Customers are undergoing rapid transformations to stay competitive in an increasingly digital world with cloud design patterns and event streaming playing critical roles in their success. They depend on real-time event streaming and data processing to simplify, automate, and combine real-time results with the historical data stored in data repositories for driving business innovation. Major trends like cloud computing, artificial intelligence, ubiquitous mobile devices, and IoT have caused an explosion of data leading customers to modernize their data platforms and embrace fully managed offerings that relieve them from operational complexities.

The Azure cloud platform offers a rich set of capabilities and services in this regard, but we are always looking for ways to offer you more choice, ease of use, and delightful experiences. We are announcing a collaboration with Confluent, a popular provider of a managed Apache Kafka service for event streaming. This service will enable you to provision Confluent Cloud resources using Azure client interfaces like the Azure portal, command-line interface (CLI), and software development kits (SDKs). Other integration points include single sign-on using Azure Active Directory and unified billing through Azure. You also have the option to draw down existing Azure commitments towards Confluent Cloud service usage.

“The urgency for real-time applications has grown exponentially and will only continue to accelerate as more businesses transform their digital services. Event streaming powers modern applications with real-time events from across an organization. We are excited to collaborate with Microsoft to help companies stay ahead of rapid demand with Confluent Cloud seamlessly available on the Azure platform. Now, event-driven applications leveraging Apache Kafka can be built faster and easier than ever.” - Jay Kreps, Co-founder and CEO, Confluent.

Quickly provision Confluent Cloud resources within the Azure ecosystem

Azure customers who want to use third-party managed offerings, such as Confluent’s Kafka service, face the challenge of maintaining separate workflows, such as creating and managing resources and users independently in Azure and in the third-party offering. This process can be complex and time-consuming. These customers expect to provision and manage Confluent Cloud resources alongside their Azure resources as part of a unified workflow.

Confluent and Microsoft have worked together to build a new integration capability between Azure and Confluent Cloud which makes the customers’ journey simpler, safer, and more seamless. This service is now available as Apache Kafka on Confluent Cloud via Azure Marketplace.

The integrated streamlined experience between Azure and Confluent Cloud

The new integration enables you to create and manage Confluent organizations from Azure client interfaces like the portal, command-line interface, and Azure SDKs. You can now use existing Azure Active Directory identities for single sign-on (SSO) to Confluent Cloud directly from Azure. Confluent Cloud usage can be invoiced as part of your monthly Azure bill, with an option to draw down existing Azure commitments; no additional bill or procurement steps are needed. Confluent Cloud organizations can be tracked and managed alongside the rest of your Azure resources.

Confluent still runs and supports their SaaS application hosted on Azure which comes with environments, clusters, topics, API Keys, and managed connectors. This integration combines Azure’s global presence, flexibility, security, and compliance with Confluent’s managed Kafka capabilities to create a seamless experience for enterprise customers.

Manually connecting data from various systems across an organization can quickly become challenging due to the development efforts and maintenance involved. With Confluent Cloud, you can leverage fully managed connectors built for popular Azure and Microsoft services including Azure Functions, Azure Blob Storage, Azure Event Hubs, Azure Data Lake Storage Gen2, and Microsoft SQL Server.

Get started to provision Confluent Cloud resources from Azure

Here is the step-by-step process to configure Confluent Cloud resources from the Azure portal.

◉ Subscribe in the Azure Marketplace: Set up and subscribe to Apache Kafka on Confluent Cloud; the service is easy to discover in the Azure Marketplace.


◉ Provisioning in Azure: Seamlessly provision Confluent organizations through Azure portal.


◉ Single sign-on to Confluent Cloud: Log in directly to the Confluent Cloud service without an additional sign-up or login step.


Source: microsoft.com