Modernize your On-premises SQL Server Infrastructure with Azure and Azure Data Studio

Data estates are becoming increasingly heterogeneous as data grows exponentially and spreads across data centers, edge devices, and multiple public clouds. In addition to the complexity of managing data across different environments, the lack of a unified view of assets, security, and governance presents an additional challenge.

Leveraging the cloud for your SQL infrastructure has many benefits: cost reduction, improved productivity, and faster insights and decision-making can make a measurable impact on an organization’s competitiveness, particularly in uncertain times. In addition, infrastructure, servers, networking, and similar concerns are maintained by default by the cloud provider.

With SQL Server 2008 and 2012 reaching their end of life, it is advisable to upgrade them or migrate them to Azure cloud services. Modernizing any version of SQL Server to Azure brings many added benefits, including:

  • Azure PaaS provides 99.99% availability
  • Azure IaaS provides 99.95% availability
  • Extended security updates for 2008, 2012 servers
  • Backing up SQL Server running in Azure VMs is made easy with Azure Backup, a stream-based, specialized solution. The solution aligns with Azure Backup’s long-term retention, zero infrastructure backup, and central management features.

Tools leveraged

For modernizing the SQL infrastructure, SNP leveraged a variety of tools from Microsoft, such as the following.

  • The Azure Database Migration Service has been used since the beginning to modernize on-premises SQL servers. Using this tool, you can migrate your data, schema, and objects from multiple sources to Azure at scale, while simplifying, guiding, and automating the process.
  • Azure Data Studio is one of the newest tools for modernizing SQL infrastructure, via its Azure SQL Migration extension. It’s designed for data professionals who run SQL Server and Azure databases on-premises and in multicloud environments.

Potential reference architecture diagram

Let’s take a closer look at the architecture: which components are involved, and what Azure Data Studio does to migrate or modernize the on-premises SQL infrastructure.

Several components are involved in an Azure Data Studio migration or modernization: the source SQL server (the on-premises SQL Server to be modernized or migrated), the destination server (the Azure SQL VM to which the on-premises SQL Server will be moved), and the staging layer (a storage account or a network share folder) for the backup files. Backup files are a major component of modernization.

Azure Data Studio with the Azure SQL Migration extension relies primarily on backup files: a full backup of the database plus the subsequent transaction log backups. Another important component is the staging layer, where the backup files are stored.

Azure Data Studio can read backups from a network share folder, an Azure storage container, or an Azure file share. In either location, the backup files must be placed in a specific structure and order: as shown in the architecture below, the backup files for each database must be placed in their own folder or container.
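The per-database layout and the required restore ordering can be sketched in Python. The folder and file names below are hypothetical, not a required naming convention:

```python
# Sketch: each database's backup files live in their own folder or
# container, and must be restored in order: the full backup (.bak)
# first, then the transaction log backups (.trn) chronologically.
# All names below are illustrative, not a required convention.

def restore_order(files):
    """Return backup files in the order they must be restored."""
    fulls = sorted(f for f in files if f.endswith(".bak"))
    logs = sorted(f for f in files if f.endswith(".trn"))
    if not fulls:
        raise ValueError("a full backup (.bak) is required before any log restore")
    return fulls + logs

# One folder (or container) per database in the staging layer:
staging = {
    "AdventureWorks": ["AdventureWorks_log2.trn", "AdventureWorks_full.bak",
                       "AdventureWorks_log1.trn"],
    "SalesDB": ["SalesDB_full.bak"],
}

for db, files in staging.items():
    print(db, restore_order(files))
```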

As part of the migration to Azure, Azure Data Studio with the Azure SQL Migration extension uses Azure Database Migration Service (DMS) as the core technology behind the scenes. DMS is integrated with Azure Data Factory, which runs a pipeline at regular intervals to copy the backup files from the on-premises network share folder to Azure and restore them on the target; if the files are already in containers, they are restored from there.

When the backup files are in a network share folder, Azure Data Studio uses the self-hosted integration runtime to establish a connection between on-premises and Azure. After the connection has been established, Azure Data Studio begins the modernization process leveraging Azure DMS.

Initially, the full backup and all subsequent transaction log backup files for each database are placed in that database’s folder or container. If the backup files are in a network share folder, Azure Data Studio copies them from there to an Azure storage container.

Azure Data Studio then restores them to the target Azure SQL VM or Azure SQL Managed Instance; if the backup files are already in the storage account, Azure Data Studio restores them directly from the storage account to the Azure target.

Following the completion of the last log restoration on the target Azure SQL database, we need to cut over the database and bring it online on the target. While the backup files are being restored, the databases remain in Restoring mode, which means we will not be able to access them until the cutover has been completed.
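The restore-then-cutover behavior can be pictured as a tiny state machine. This Python sketch uses our own state names for illustration, not an official API:

```python
# Sketch of the database state during migration: while log backups
# are being restored the database stays in "Restoring" and cannot be
# queried; only after cutover does it come online on the target.
# State names here are illustrative.

class MigratedDatabase:
    def __init__(self):
        self.state = "Restoring"

    def restore_log(self, log_file):
        if self.state != "Restoring":
            raise RuntimeError("cannot restore after cutover")
        return f"restored {log_file}"

    def cutover(self):
        self.state = "Online"

db = MigratedDatabase()
db.restore_log("db_log1.trn")
db.cutover()
print(db.state)
```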

Your next steps

If you like what you have read so far, let’s move forward together with confidence. We are here to help at every step. Contact SNP’s migration experts.

Modernize and Migrate your SQL Server Workloads with Azure

Modernizing and migrating SQL Server is just one of the many reasons why a company might want to migrate its data. Other common reasons may include mergers, hardware upgrades or moving to the cloud. In most cases, however, data migrations are associated with downtime, data loss, operational disruptions, and compatibility problems.

With SNP Technologies Inc., these concerns are alleviated, and the migration process is simplified. We help businesses migrate complete workloads seamlessly through real-time, byte-level replication and orchestration. For enhanced agility with little to no downtime, we can migrate data and systems between physical, virtual and cloud-based platforms.

When it comes to SQL Server modernization, we can migrate and upgrade your workload simultaneously. Production sources can run older versions of SQL Server that are upgraded along the way; for example, SQL Server 2008, 2008 R2, and 2012 can be moved to a newer version of Windows and SQL Server, or to Azure.

 

Some key benefits of modernizing or migrating your SQL workloads include:

  • Built-in high availability and disaster recovery for Azure SQL PaaS, with 99.99% availability
  • Automatic backups for Azure SQL PaaS services
  • High availability with a 99.95% SLA for Azure IaaS
  • The option to leverage Azure automatic backups or Azure Backup for SQL Server on Azure VMs

Listed below are the steps SNP follows to migrate an on-premises SQL Server to Azure PaaS or IaaS:

  • An assessment to determine the most appropriate target and its Azure sizing.
  • A performance assessment conducted before the migration to identify potential issues with the modernization.
  • A performance assessment conducted post-migration to determine whether performance has been impacted.
  • Migration to the designated target.

 

As part of our modernization process, we utilize a variety of tools that Microsoft provides, including the following.

Assessment to determine the most appropriate target and its Azure sizing with Azure Migrate:

Azure Migrate is an Azure service that uses the Azure SQL assessment to evaluate the customer’s on-premises SQL infrastructure. Azure Migrate analyzes all objects on the SQL server against each target (Azure SQL Database, Azure SQL Managed Instance, or SQL Server on Azure VM) and calculates the appropriate target and Azure size by considering performance parameters such as IOPS, CPU, memory, and cost. Following the assessment, SNP gets a better idea of what needs to be migrated, and the assessment report recommends the most appropriate migration solution.
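Conceptually, the sizing step compares observed performance counters against each target's limits. The Python sketch below only illustrates that idea: the thresholds are invented, and Azure Migrate's real sizing rules are far more detailed:

```python
# Illustrative readiness-style scoring: compare observed performance
# counters against per-target limits and pick the first target that
# satisfies all of them. These limits are made up for the sketch and
# are NOT Azure Migrate's real sizing tables.

TARGET_LIMITS = {
    "Azure SQL Database":         {"iops": 7000,  "cpu": 8,  "memory_gb": 32},
    "Azure SQL Managed Instance": {"iops": 20000, "cpu": 16, "memory_gb": 64},
    "SQL Server on Azure VM":     {"iops": 80000, "cpu": 64, "memory_gb": 432},
}

def recommend(observed):
    """Return the smallest hypothetical target that fits the workload."""
    for target, limits in TARGET_LIMITS.items():
        if all(observed[k] <= limits[k] for k in limits):
            return target
    return "No ready target"

print(recommend({"iops": 5000, "cpu": 4, "memory_gb": 16}))
```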

This assessment generates four types of reports:

  • Recommended type (best fit): if the SQL server is ready for all targets, the assessment recommends the best fit considering all factors such as performance and cost.
  • Recommendation of instances to Azure SQL Managed Instance: indicates whether the SQL server is ready for Managed Instance. If it is, a target size is recommended; if not, the issues found are listed along with corresponding recommendations.
  • Recommendation of instances to SQL Server on Azure VM: assesses each instance individually and provides a suitable configuration specific to that instance.
  • Recommendation of servers to SQL Server on Azure VM: if the server is ready to move to SQL Server on Azure, an appropriate recommendation is given.

Our assessment checks for performance impacts post-migration with Microsoft’s Data Migration Assistant

To prepare for modernizing a SQL infrastructure to Azure, we need to know which objects will be impacted post-migration so we can plan the necessary follow-up steps. A second assessment is performed using Microsoft’s Data Migration Assistant (DMA) to identify all the objects that will be impacted after migration. DMA categorizes the affected objects into the five categories below (four when modernizing to SQL Server on Azure VM).

Some key factors considered at this stage include:

  1. Breaking changes: changes that will impact the behavior of a particular object. Following a migration, we will need to ensure that breaking changes are addressed.
  2. Behavior changes: changes that may impact query performance and should be addressed for optimal results.
  3. Informational issues: information we can use to identify issues that might affect the workload post-migration.
  4. Deprecated features: features that are slated for deprecation.
  5. Migration blockers: objects that will block the migration; these must either be removed prior to migration or changed per the business requirements.

Note: Migration blockers are specific to the Modernization of SQL Server to Azure SQL PaaS
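As a rough illustration, triaging DMA-style findings could look like the following Python sketch. The sample objects and the triage rules are our own simplification, not DMA output:

```python
# Sketch: bucket DMA-style findings by category and decide which
# must be handled before migration vs. afterwards. Category names
# follow the list above; the sample findings are hypothetical.

BLOCKING = {"Migration blockers"}
POST_MIGRATION = {"Breaking Changes", "Behavior Changes"}

findings = [
    {"object": "dbo.usp_Report", "category": "Breaking Changes"},
    {"object": "dbo.OldLinkedServer", "category": "Migration blockers"},
    {"object": "dbo.LegacyJoinProc", "category": "Deprecated Feature"},
]

def triage(findings):
    plan = {"fix_before_migration": [], "fix_after_migration": [], "review": []}
    for f in findings:
        if f["category"] in BLOCKING:
            plan["fix_before_migration"].append(f["object"])
        elif f["category"] in POST_MIGRATION:
            plan["fix_after_migration"].append(f["object"])
        else:
            plan["review"].append(f["object"])
    return plan

print(triage(findings))
```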

 

Modernization using Azure Data Studio:

Once we have an Azure target along with the Azure size and a list of affected objects, we can move on to modernization, where we migrate our SQL infrastructure to the Azure target. In this phase, the SQL infrastructure is modernized using Azure Data Studio with the Azure SQL Migration extension, leveraging Azure Database Migration Service (Azure DMS).

In Azure Data Studio, you can modernize SQL Server infrastructure using native SQL backups (the latest full backup plus the transaction log backups taken since). In this method, backup files of the SQL Server databases are copied and restored on the target. Azure Data Studio automates the backup-and-restore process: all we must do is place the backup files in a shared network folder or Azure storage container so the tool recognizes the backups and restores them automatically.

Post Migration:

Upon completion of modernization, all objects impacted by the modernization should be resolved for optimal performance. DMA provides information regarding all impacted objects and offers recommendations on how to address them.

Your Next Steps:

If you like what you’ve read so far, let’s move forward together with confidence. We’re here to help at every step. Contact SNP’s migration experts here

 

 

Azure Arc enabled Kubernetes for Hybrid Cloud Management — Manage Everything and Anywhere

Azure Arc-enabled Kubernetes extends Azure’s management capabilities to Kubernetes clusters running anywhere, whether in public clouds or on-premises data centers. This integration allows customers to leverage Azure features such as Azure Policy, GitOps, Azure Monitor, Microsoft Defender, Azure RBAC, and Azure Machine Learning.

Key features of Azure Arc-enabled Kubernetes include:

  1. Centralized Management: Attach and configure Kubernetes clusters from diverse environments in Azure, facilitating a unified management experience.
  2. Governance and Configuration: Apply governance policies and configurations across all clusters to ensure compliance and consistency.
  3. Integrated DevOps: Streamline DevOps practices with integrated tools that enhance collaboration and deployment efficiency.
  4. Inventory and Organization: Organize clusters through inventory, grouping, and tagging for better visibility and management.
  5. Modern Application Deployment: Enable the deployment of modern applications at scale across any environment.

In this blog, we will follow a step-by-step approach and learn how to:

1. Connect Kubernetes clusters running outside of Azure

2. Use GitOps to define applications and cluster configuration in source control

3. Apply Azure Policy for Kubernetes

4. Enable Azure Monitor for containers

 

1. Connect Kubernetes clusters

Prerequisites

  • Azure account with an active subscription.
  • Identity – User or service principal
  • Latest Azure CLI
  • Extensions – connectedk8s and k8sconfiguration
  • An up-and-running Kubernetes cluster
  • Resource providers – Microsoft.Kubernetes, Microsoft.KubernetesConfiguration, Microsoft.ExtendedLocation

Create a Resource Group

Create a resource group using the command below, choosing your desired location. Azure Arc for Kubernetes supports most Azure regions; see the Azure products by region page for the supported regions.

* az group create --name AzureArcRes -l EastUS -o table

For example: az group create --name AzureArcK8sTest --location EastUS --output table

Connect to the cluster with admin access and attach it with Azure Arc

We use the az connectedk8s connect CLI extension to attach our Kubernetes clusters to Azure Arc.

This command verifies connectivity to our Kubernetes cluster via the kube-config file ("~/.kube/config"), deploys the Azure Arc agents to the cluster in the "azure-arc" namespace, and installs Helm v3 into the .azure folder.

For this demonstration we connect and attach AWS – Elastic Kubernetes service and Google cloud – Kubernetes engine. Below, we step through the commands used to connect and attach to each cluster.

 

AWS – EKS

* aws eks --region <Region> update-kubeconfig --name <ClusterName>

* kubectl get nodes


* az connectedk8s connect --name <ClusterName> --resource-group AzureArcRes


GCloud – GKE


* gcloud container clusters get-credentials <ClusterName> --zone <ZONE> --project <ProjectID>

* kubectl get no

* az connectedk8s connect --name <ClusterName> --resource-group AzureArcRes


Verify Connected Clusters

* az connectedk8s list -g AzureArcRes -o table


 

2. Using GitOps to define applications & clusters

We use the connected GKE cluster for our example to deploy a simple application.

Create a configuration to deploy an application to the Kubernetes cluster.
We use the "k8sconfiguration" extension to link our connected cluster to an example Git repository provided by SNP.

* export KUBECONFIG=~/.kube/gke-config

* az k8sconfiguration create \
--name app-config \
--cluster-name <ClusterName> --resource-group <YOUR_RG_NAME> \
--operator-instance-name app-config --operator-namespace cluster-config \
--repository-url https://github.com/gousiya573-snp/SourceCode/tree/master/Application \
--scope cluster --cluster-type connectedClusters

Check to see that the namespaces, deployments, and resources have been created:

* kubectl get ns --show-labels

We can see that the cluster-config namespace has been created.


* kubectl get po,svc

The flux operator has been deployed to the cluster-config namespace, as directed by our sourceControlConfiguration, and the application deployed successfully: the pods are Running and the service’s LoadBalancer IP has been created.


Access the EXTERNAL-IP to see the output page:


Please Note:

Supported repository-url Parameters for Public & Private repos:

* Public GitHub Repo   –  http://github.com/username/repo  (or) git://github.com/username/repo

* Private GitHub Repo –  https://github.com/username/repo (or) git@github.com:username/repo

* For private repos, flux generates an SSH key and logs the public key as shown below:



3. Azure Policy for Kubernetes

Use Azure Policy to enforce that each Microsoft.Kubernetes/connectedClusters resource or GitOps-enabled Microsoft.ContainerService/managedClusters resource has specific Microsoft.KubernetesConfiguration/sourceControlConfigurations applied to it.

Assign Policy:

To create the policy, navigate to Policy in the Azure portal and select Definitions in the Authoring section.
Click Initiative definition to create the policy, search for "gitops" in the available definitions, and click the "Deploy GitOps to Kubernetes clusters" policy to add it.
Select the subscription under Definition locations, then give the policy assignment a name and description.

Choose Kubernetes from the existing category list and scroll down to fill in the configuration details of the application.


Select the policy definition, click Assign, and set the scope for the assignment. The scope can be at the resource-group or subscription level; then complete the other basic steps such as assignment name, exclusions, and remediation.

Click Parameters and provide names for the configuration resource, operator instance, and operator namespace; set the operator scope to cluster level or namespace (the operator type is Flux); and provide your application’s GitHub repo URL (public or private) in the Repository Url field. Additionally, pass operator parameters such as "--git-branch=master --git-path=manifests --git-user=your-username --git-readonly=false". Finally, click Save and confirm that a policy with the given name appears under Assignments.

Once the assignment is created, the policy engine will identify all connectedCluster or managedCluster resources located within the scope and apply the sourceControlConfiguration to them.


--git-readonly=false enables CI/CD for the repo and creates automatic releases for commits.

 


 

Verify a Policy Assignment

Go to the Azure portal and click the connected cluster resources to check the compliance status. Compliant means config-agent was able to successfully configure the cluster and deploy flux without error.


We can see the policy assignment that we created above, and the Compliance state should be Compliant.


4. Azure Monitor for Containers

Azure Monitor for containers provides a rich monitoring experience for Azure Kubernetes Service (AKS) and AKS Engine clusters. It can be enabled for one or more existing Arc enabled Kubernetes clusters using the Azure CLI, the Azure portal, or Resource Manager.

Create an Azure Log Analytics workspace, or use an existing one, to hold the insights and logs. Use the command below to install the extension and configure it to report to the Log Analytics workspace.

* az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings logAnalyticsWorkspaceResourceID=<armResourceIdOfExistingWorkspace>

It takes about 10 to 15 minutes for the health metrics, logs, and insights for the cluster to appear. You can check the status of the extension in the Azure portal or through the CLI; the extension status should show as "Installed".


We can also scrape and analyze Prometheus metrics from our cluster.

Clean Up Resources

To delete an extension:

* az k8s-extension delete --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group-name>

To delete a configuration:

* az k8sconfiguration delete --name '<config name>' -g '<resource group name>' --cluster-name '<cluster name>' --cluster-type connectedClusters

To disconnect a connected cluster:

* az connectedk8s delete --name <cluster-name> --resource-group <resource-group-name>

 

Conclusion:

This blog provides an overview of Azure Arc-enabled Kubernetes, highlighting how SNP assists its customers in setting up Kubernetes clusters with Azure Arc for scalable deployment. It emphasizes the benefits of Azure Arc in managing Kubernetes environments effectively.

SNP offers subscription services to accelerate your Kubernetes journey, enabling the installation of production-grade Kubernetes both on-premises and in Microsoft Azure. For more information or to get assistance from SNP specialists, you can reach out through the provided contact options. Contact SNP specialists here.

Accelerate Innovation Across Hybrid & Multicloud Environments with Azure Arc

With the growing trend of multicloud and edge computing, organizations are increasingly finding themselves managing a diverse array of applications, data centers, and hosting environments. This heterogeneity presents significant challenges in managing, governing, and securing IT resources. To address these complexities, organizations need a robust solution that enables them to centrally inventory, organize, and enforce control policies across their entire IT estate, regardless of location.

SNP leverages Azure Arc and a hybrid approach to empower its customers to effectively manage resources deployed in both Azure and on-premises environments through a unified control plane. With Azure Arc, organizations can simplify their infrastructure management, making it easier to accelerate migration decisions driven by policies while ensuring compliance with regulatory requirements.

Microsoft Azure enables management of a variety of services deployed externally, including:

  • Windows and Linux servers: These can run on bare metal, virtual machines (VMs), or public cloud IaaS environments.
  • Kubernetes clusters: Organizations can manage their containerized applications seamlessly across different environments.
  • Data services: Azure Arc supports data services based on Azure SQL and PostgreSQL Hyperscale, allowing for consistent data management practices.
  • Microservices applications: Applications packaged and deployed as microservices running on Kubernetes can be easily monitored and managed through Azure Arc.

 

Hybrid Unified Management & How it Benefits your Business

Azure Arc involves deploying an agent on servers or on Kubernetes clusters so that resources can be projected into Azure Resource Manager. Once the initial connectivity is established, Arc extends governance controls such as Azure Policy and Azure role-based access control across the hybrid infrastructure. With Azure governance controls, we gain consistency across environments, which helps enhance productivity and mitigate risks.

Some key benefits of Azure Arc include:

  • Azure Arc enabled solutions can easily expand into a hybrid-cloud architecture as they are designed to run virtually anywhere.
  • Azure Arc data includes technical and descriptive details, along with compliance and security policies.
  • Enterprises can use Azure Security Center to ensure compliance of all resources registered with Azure Arc, irrespective of where they are deployed. They can quickly patch the operating systems running in VMs as soon as a vulnerability is found. Policies can be defined once and automatically applied to all resources across Azure, the data center, and even VMs running in other cloud platforms.
  • All resources registered with Azure Arc send their logs to the central, cloud-based Azure Monitor — a comprehensive approach to deriving insights for highly distributed and disparate infrastructure environments.
  • Leveraging Azure Automation, mundane to advanced maintenance operations across public, hybrid, or multicloud environments can be performed effortlessly.

 

Azure services that support management and governance of other cloud platforms include:

  • Azure Active Directory
  • Azure Monitor
  • Azure Policy
  • Azure Log Analytics
  • Azure Security Center/Defender
  • Azure Sentinel

 

Unified Kubernetes Management

With AKS and Kubernetes, Azure Arc provides the ability to deploy and configure Kubernetes applications consistently across all environments, adopting modern DevOps techniques. This offers:

Flexibility

  • Container platform of your choice with out-of-the-box support for most Cloud native applications.
  • Used across Dev, Test and Production Kubernetes clusters in your environment.

Management

  • Inventory, organize, and tag Kubernetes clusters.
  • Deploy apps and configuration as code using GitOps.
  • Monitor and manage at scale with policy-based deployment.

Governance and security

  • Built in Kubernetes Gatekeeper policies.
  • Apply consistent security configuration at scale.
  • Consistent cluster extensions for Monitor, Policy, Security, and other agents

Role-based access control

  • Central IT based at-scale operations.
  • Management by workload owner based on access privileges.

Leveraging GitOps

  • Azure Arc also lets us organize, view, and configure all clusters in Azure (like Azure Arc enabled servers) uniformly with GitOps (zero-touch configuration).
  • In GitOps, configurations are declared and stored in a Git repo; Arc agents running on the cluster continuously monitor this repo for updates or changes and automatically pull them down to the cluster.
  • We can use cloud-native tools and practices to apply GitOps configuration and app deployment to one or more clusters at scale.
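The GitOps loop described above can be reduced to a small reconcile function. This Python sketch is purely conceptual and omits everything real flux agents do beyond the core idea:

```python
# Conceptual sketch of the GitOps loop the Arc agents implement:
# desired state lives in a Git repo, and the agent repeatedly
# compares it to the cluster's actual state and applies the diff.
# Manifests are represented as simple strings for illustration.

def reconcile(desired, actual):
    """Return the changes needed to make `actual` match `desired`."""
    changes = {}
    for name, manifest in desired.items():
        if actual.get(name) != manifest:
            changes[name] = manifest
    return changes

repo = {"app-deployment": "image: app:v2", "app-service": "port: 80"}
cluster = {"app-deployment": "image: app:v1"}

# One iteration of the loop: pull the repo, apply the diff.
cluster.update(reconcile(repo, cluster))
print(cluster == repo)
```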

 

Azure Arc Enabled Data Services

Azure Arc makes it possible to run Azure data services on-premises, at the edge, and in third-party clouds using Kubernetes on the hardware of our choice.

Arc can bring cloud elasticity on-premises so you can optimize the performance of your data workloads, with the ability to scale dynamically without application downtime. By connecting to Azure, one can see all data services running on-premises alongside those running in Azure through a single pane of glass, using familiar tools such as the Azure portal, Azure Data Studio, and the Azure CLI.

Azure Arc enabled data services can run Azure PostgreSQL or SQL Managed Instance in any supported Kubernetes environment in AWS or GCP, just as they would run in an on-premises environment.

With Azure Arc, organizations can reach the following overall business objectives for hybrid architectures:

  • Standardization of operations and procedures
  • Organization of resources
  • Regulatory Compliance and Security
  • Cost Management
  • Business Continuity and Disaster Management

 

For more on how you can revolutionize the management and development of your hybrid environments with Azure Arc, contact SNP’s experts.

SNP’s Managed Detect & Response Services Powered by Microsoft Sentinel & Defenders (MXDR)

SNP’s Managed Detection and Response (MDR) for Microsoft Sentinel service brings integrations with Microsoft services like Microsoft Defenders (MXDR), threat intelligence, and the customer’s hybrid and multicloud infrastructure to monitor, detect, and respond to threats quickly. With our managed security operations team, SNP’s threat detection experts help identify and investigate threats, and provide high-fidelity detection through ML-based threat modeling for your hybrid and multicloud infrastructure.

SNP’s MXDR Services Entitlements:

SNP’s Managed services security framework brings the capability of centralized security assessment for managing your on-premises or cloud infrastructure, where we offer:

 

Leveraging SNP’s security model below, we help our customers:

  • Build their infrastructure and applications with cloud-native protection throughout the cloud application lifecycle.
  • Separate duties in entitlements management through defined workflows, protecting against governance and compliance challenges.
  • Prioritize data security, protecting sensitive data from its various sources to the point of consumption.
  • Consolidate and automate telemetry across attack surfaces with Microsoft Sentinel, while orchestrating workflows and processes to speed up response and recovery.

 

SNP’s Managed Extended Detection & Response (MXDR) Approach:

Our six-step incident response approach helps our customers maintain, detect, respond, notify, investigate, and remediate cyberthreats:

 

For more on SNP’s Managed Detect & Response Services Powered by Microsoft Sentinel & Defenders (MXDR), contact our security experts here.

Bring your Data Securely to the Cloud by Implementing Column Level security, Row Level Security & Dynamic Data Masking with Azure Synapse Analytics

Azure Synapse Analytics from Microsoft is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics. SNP helps its customers migrate their legacy data warehouse solutions to Azure Synapse Analytics to gain the benefits of an end-to-end analytics platform that provides high availability, security, speed, scalability, cost savings, and industry-leading performance for enterprise data warehousing workloads.

A common business scenario we cover:

As organizations scale, data grows exponentially. And with the workforce working remotely, data protection is one of the primary concerns of organizations around the world today. There are several high-level security best practices that every enterprise should adopt, to protect their data from unauthorized access. Here are our recommendations to help you prevent unauthorized data access.

The SNP solution:

With Azure Synapse Analytics, SNP provides its customers enhanced security with column level security, row-level security & dynamic data masking.

Below is an example of a sample table used to implement column-level security, row-level security, and dynamic data masking for your data.

Revenue table:

Azure Synapse Security

Codes:

Step 1: Create users

create user [CEO] without login;

create user [US Analyst] without login;

create user [WS Analyst] without login;

 

Column Level Security

The column-level security feature in Azure Synapse simplifies the design and coding of security in applications. It restricts column access to protect sensitive data.

In this scenario, we will be working with two users. The first one is the CEO, who needs access to all company data. The second one is an Analyst based in the United States, who does not have access to the confidential Revenue column in the Revenue table.

Follow this lab one step at a time to see how column-level security removes access to the Revenue column for the US Analyst.

 

Step 2: Verify the existence of the "CEO" and "US Analyst" users in the data warehouse.

SELECT Name as [User1] FROM sys.sysusers WHERE name = N'CEO';

SELECT Name as [User2] FROM sys.sysusers WHERE name = N'US Analyst';

 

Step 3: Now let us enforce column-level security for the US Analyst.

The Revenue table in the warehouse has columns such as Analyst, CampaignName, Region, State, City, RevenueTarget, and Revenue. The revenue generated from each campaign is classified and should be hidden from US Analysts.

REVOKE SELECT ON dbo.Revenue FROM [US Analyst];

GRANT SELECT ON dbo.Revenue([Analyst], [CampaignName], [Region], [State], [City], [RevenueTarget]) TO [US Analyst];

The security feature is now enforced. Running a query that includes the Revenue column as the ‘US Analyst’ user results in a permission error, because the US Analyst does not have access to that column. A query that omits the Revenue column succeeds.
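The behavior can be verified by impersonating the restricted user with EXECUTE AS. This is a sketch; it assumes the users created in Step 1 and the grants above:

```sql
-- Impersonate the restricted user.
EXECUTE AS USER = 'US Analyst';

-- This would fail: the SELECT grant does not cover the Revenue column.
-- SELECT * FROM dbo.Revenue;

-- This succeeds: the query touches only the granted columns.
SELECT Analyst, CampaignName, Region, [State], City, RevenueTarget
FROM dbo.Revenue;

REVERT;  -- switch back to the original security context
```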


Row Level Security

Row-level Security (RLS) in Azure Synapse enables us to use group membership to control access to rows in a table. Azure Synapse applies the access restriction every time data access is attempted from any user.

In this scenario, the Revenue table has two analysts, US Analyst and WS Analyst, each with jurisdiction over a specific region; the US Analyst covers the South East region. An analyst should only see data from their own region. The Revenue table has an Analyst column that we can use to filter rows to a specific analyst.

SELECT DISTINCT Analyst, Region FROM dbo.Revenue ORDER BY Analyst;

Review any existing security predicates in the database:

SELECT * FROM sys.security_predicates;

 

Step 1:

Create a new Schema to hold the security predicate, then define the predicate function. It returns 1 (or True) when a row should be returned in the parent query.

CREATE SCHEMA Security

GO

CREATE FUNCTION Security.fn_securitypredicate(@Analyst AS sysname)

RETURNS TABLE

WITH SCHEMABINDING

AS

RETURN SELECT 1 AS fn_securitypredicate_result

WHERE @Analyst = USER_NAME() OR USER_NAME() = 'CEO'

GO

Step 2:

Now we define a security policy that adds the filter predicate to the Revenue table. This filters rows based on the user name.

CREATE SECURITY POLICY SalesFilter 

ADD FILTER PREDICATE Security.fn_securitypredicate(Analyst)

ON dbo.Revenue

WITH (STATE = ON);

Grant SELECT permission on the Revenue table.

GRANT SELECT ON dbo.Revenue TO CEO, [US Analyst], [WS Analyst];

 

Step 3:

Let us now test the filter predicate by selecting data from the Revenue table as the ‘US Analyst’ user.

As we can see, the query returns only the rows belonging to the US Analyst. The login name is US Analyst, and row-level security is working.
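The test above can be reproduced with EXECUTE AS. This sketch assumes the security policy and grants from the previous steps are in place:

```sql
-- As the CEO, the predicate matches every row (USER_NAME() = 'CEO').
EXECUTE AS USER = 'CEO';
SELECT COUNT(*) AS VisibleRows FROM dbo.Revenue;
REVERT;

-- As the US Analyst, only rows where Analyst = 'US Analyst' are returned.
EXECUTE AS USER = 'US Analyst';
SELECT Analyst, Region, RevenueTarget FROM dbo.Revenue;
REVERT;
```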


Dynamic Data Masking

Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal with minimal impact on the application layer. DDM can be configured on designated database fields to hide sensitive data in the result sets of queries. With DDM the data in the database is not changed. Dynamic data masking is easy to use with existing applications since masking rules are applied in the query results. Many applications can mask sensitive data without modifying existing queries.

In this scenario, we have identified some sensitive information in the customer table. The customer would like us to obfuscate the Credit Card and Email columns of the Customer table to Data Analysts.

Let us take the below customer table:

We first confirm that no masking is enabled as of now.

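Since the Customer table screenshot is not reproduced here, the following is a hypothetical version of the table (only the CreditCard and Email columns matter for this lab), together with the catalog query used to confirm that no masks exist yet:

```sql
-- Hypothetical Customer table; column names and types are illustrative.
CREATE TABLE dbo.Customer
(
    CustomerID int           NOT NULL,
    Name       nvarchar(100) NOT NULL,
    CreditCard varchar(19)   NOT NULL,   -- e.g. '1234-5678-9012-3456'
    Email      varchar(100)  NOT NULL
);

-- Returns no rows until ALTER COLUMN ... ADD MASKED has been run.
SELECT c.name AS ColumnName, c.masking_function
FROM sys.masked_columns AS c
JOIN sys.tables AS t ON c.[object_id] = t.[object_id]
WHERE t.name = 'Customer';
```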

Next, let us add masking for the credit card and email information.

Step 1:

Now let us mask the ‘CreditCard’ and ‘Email’ columns of the ‘Customer’ table.

ALTER TABLE dbo.Customer 

ALTER COLUMN [CreditCard] ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

GO

ALTER TABLE dbo.Customer

ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

GO

 

Now, the results show that masking is enabled on both columns:

Executing a query as the ‘US Analyst’ user, the data in both columns is now masked.

Unmasking the data:
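To let a privileged user see the original values, grant the UNMASK permission (database-wide in Synapse dedicated SQL pools); revoking it re-applies the masks. A sketch, assuming the users created earlier:

```sql
-- The CEO now sees unmasked values; analysts continue to see masked data.
GRANT UNMASK TO [CEO];

-- To re-apply the masks for that user later:
REVOKE UNMASK FROM [CEO];
```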


Conclusion:

In the samples above, SNP has shown how column-level security, row-level security, and dynamic data masking can be implemented for different business scenarios. Contact SNP Technologies for more information.

Top 5 FAQs on Operationalizing ML Workflow using Azure Machine Learning

Enterprises today are adopting artificial intelligence (AI) at a rapid pace to stay ahead of their competition, deliver innovation, improve customer experiences, and grow revenue. However, the challenge with such integrations is that the development, deployment, and monitoring of these models differ from the traditional software development lifecycle that many enterprises are accustomed to.

Leveraging AI and machine learning applications, SNP helps bridge the gap between the existing state and the ideal state of how things should function in a machine learning lifecycle to achieve scalability, operational efficiency, and governance.

SNP has put together a list of the top 5 challenges enterprises face in the machine learning lifecycle and how SNP leverages Azure Machine Learning to help your business overcome them.

Q1. How much investment is needed on hardware for data scientists to run complex deep learning algorithms?

By leveraging Azure Machine Learning workspace, data scientists can use the same hardware virtually at a fraction of the price. The best part about these virtual compute resources is that businesses are billed based on the amount of resources consumed during active hours thereby reducing the chances of unnecessary billing.

Q2: How can data scientists manage redundancy in training segments and avoid rewriting existing or new training scripts when multiple data scientists collaborate?

With Azure data pipelines, data scientists can create a model training pipeline consisting of multiple loosely coupled segments that are reusable in other training pipelines. Data pipelines also allow multiple data scientists to collaborate on different segments of the training pipeline simultaneously and later combine their segments into a consolidated pipeline.

Q3. A successful machine learning lifecycle involves a data scientist finding the best performing model through multiple iterative processes. Each process involves manual versioning, which results in inaccuracies during deployments and auditing. So how best can data scientists manage version control?

Azure Machine Learning workspace for model development can prove to be a very useful tool in such cases. It tracks performance metrics and functional metrics of each run to provide the user with a visual interface on model performance during training. It can also be leveraged to register models developed on Azure Machine Learning workspace or models developed on your local machines for versioning. Versioning done using Azure Machine Learning workspace makes the deployment process simpler and faster.

Q4. One of the biggest challenges while integrating the machine learning model with an existing application is the tedious deployment process which involves extensive manual effort. So how can data scientists simplify the packaging and model deployment process?

Using Azure Machine Learning, data scientists and app developers can easily deploy Machine Learning models almost anywhere. Machine Learning models can be deployed as a standalone endpoint or embedded into an existing app or service or to Azure IoT Edge devices.

Q5. How can data scientists automate the machine learning process?

A data scientist’s job is not complete once the Machine Learning model is integrated into the app or service and deployed successfully. The model has to be closely monitored in a production environment to check its performance, and it must be re-trained and re-deployed once there is a sufficient quantity of new training data or when there are data discrepancies (when actual data is very different from the data your model was trained on, affecting model performance).

Azure Machine Learning can be used to trigger a re-deployment when your Git repository has a new code check-in. Azure Machine Learning can also be used to create a re-training pipeline to take new training data as input to make an updated model. Additionally, Azure Machine Learning provides alerts and log analytics to monitor and govern the containers used for deployment with a drag-drop graphical user interface to simplify the model development phase.

Start building today!

SNP is excited to bring you machine learning and AI capabilities to help you accelerate your machine learning lifecycle, from new productivity experiences that make machine learning accessible to all skill levels, to robust MLOps and enterprise-grade security, built on an open and trusted platform helping you drive business transformation with AI. Contact SNP here.

Azure’s Software Defined Connectivity — Virtual WAN

The hybrid cloud network consists of both physical and virtualized technologies to provide connectivity across Cloud, private data centers, on-premises, and branch offices. To help customers with their massive modernization efforts, SNP leverages the Azure Virtual WAN to build and deploy applications while simplifying branch connectivity. 

Azure Virtual WAN:

Azure Virtual WAN is software-defined connectivity that allows you to take advantage of optimized and automated branch connectivity on a global scale with Azure. Virtual WAN provides a better networking experience by seamlessly connecting branches to Azure with SD-WAN and VPN devices (i.e., customer premises equipment, or CPE), with built-in ease of use and configuration management. It also provides security and routing functionalities through a single operational interface. Its capabilities include:

  • Branch connectivity (via connectivity automation from Virtual WAN Partner devices such as SD-WAN or VPN CPE).
  • Site-to-site VPN connectivity.
  • Remote user VPN connectivity (point-to-site).
  • Private connectivity (ExpressRoute).
  • Intra-cloud connectivity (transitive connectivity for virtual networks).
  • VPN ExpressRoute inter-connectivity.
  • Routing, Azure Firewall, and encryption for private connectivity.

 

How it works:

Traffic from branches enters Microsoft’s network at the Microsoft edge site closest to a given branch office. Currently, there are 130 of these sites in the Microsoft global network. Once traffic is within the network, it terminates at one of your Virtual WAN’s virtual hubs.

 

Azure’s Virtual WAN offers benefits like:

  • Integrated connectivity solutions in hub and spoke: Automate site-to-site configuration and connectivity between on-premises branch office and an Azure hub.
  • Automated spoke setup and configuration: Connect virtual networks and workloads to the Azure hub seamlessly.
  • Intuitive troubleshooting: Ability to see the end-to-end flow within Azure, and then use this information to take required actions.
  • Massive scalability with software-defined connectivity: Connect global branch offices, point-of-sale locations, and sites using Azure and the Microsoft global network.
  • Optimize security and agility: Leverage secure transport network services and integrated firewall capabilities to ensure the secure delivery of all applications across your hybrid enterprise. Securely identify and manage the performance of today’s modern and encrypted applications running over SSL, TLS, and HTTPS.
  • One place for managing your network: Quickly respond to the needs of your business with application-centric, business intent-based policies that are centrally managed and applied network-wide across all remote locations.
  • Reduced costs: Maximize the use of redundancy and lower-cost connectivity options with zero-touch provisioning and centralized management to reduce the cost of deploying and maintaining a hybrid WAN architecture.
  • Reliability: Create a highly available WAN architecture that virtualizes and dynamically leverages multiple links at remote locations. Retain end-to-end visibility of network performance and end-user experience for troubleshooting and problem resolution.
  • Performance: Deliver superior application performance to your business with the industry-leading WAN optimization solution from SNP.

 

For more information on Azure Virtual WAN, contact SNP Technologies here.

Ensure PaaS Resources Are Private in Your Hybrid Cloud

Use a secure hub-spoke network architecture and Azure Policies to enforce the use of Private Endpoints in a hub’s centralized, private DNS zone.

Security is a leading concern as enterprises adopt hybrid cloud strategies and a challenging one at that. At SNP Technologies, we have hybrid security solutions to meet the stringent security requirements of our customers.

In this article, we highlight the scenario wherein the organization has adopted Azure managed resources, such as Azure SQL Database and Azure App Service, in their hybrid cloud solution architecture. These so-called “platform-as-a-service” resources (or PaaS for short) are exposed to the public internet by default.

Hence, the challenge is how to rein in the PaaS resources so their traffic flows only over the organization’s private network. The solution entails the integration of DNS zones with private endpoints and the use of governance policies to enforce the security configuration for each PaaS resource added to the network.

First, we discuss a recommended network architecture to fulfill this requirement. Then we provide examples of governance policies designed by SNP that enforce secure practices for private IP range integration and name resolution. These methods solve many hybrid cloud solution architecture concerns, like:

  • Configuring a Hub & Spoke network model with an Azure private DNS zone
  • Handling the redirect of DNS queries originating from on-premises to an Azure private DNS zone via a private IP
  • Providing an Azure Virtual Network private IP for Azure managed (PaaS) resources (e.g., Azure SQL, App Service)
  • Connecting Azure PaaS resources to Azure private DNS zones for DNS resolution
  • Blocking public endpoints on Azure PaaS resources
  • Deploying PaaS resources on different subscriptions within the same tenant

Networking Solution

Figure 1 illustrates the architecture designed by SNP engineers to secure a hybrid cloud having PaaS resources. This example has an Azure SQL database and the architecture features:

  1. For the on-premises network, the Active Directory DNS servers are configured with conditional forwarders for each private endpoint public DNS zone, such as database.windows.net and windows.net. These are then pointed to the DNS server hosted in the hub VNet in Azure.
  2. The DNS server hosted in the hub VNet on Azure uses the Azure-provided DNS resolver (168.63.129.16) as a forwarder.
  3. The virtual network used as a hub VNet is linked to the Private DNS zone for Azure services names, such as privatelink.database.windows.net.
  4. The spoke virtual network is configured only with the hub VNet DNS servers and sends its DNS requests to them.
  5. When the DNS servers hosted in the Azure VNet are not authoritative for the Active Directory domain names, conditional forwarders for the private link domains are set up on the on-premises DNS servers, pointing to the Azure DNS forwarders.

Figure 1

 

Governance Solution

To ensure private networking for PaaS resources, the following conditions should be met:

  • The PaaS resource has a private endpoint, not a public endpoint
  • A DNS record for the PaaS resource is entered in the central, private DNS zone for the entire network

Below we describe three policies that work together to ensure these conditions are met.

Please note that the policies are customized and not built-in Azure policies (e.g. Azure Policy samples). In the list of resources provided at the end of this article is a link to a tutorial on how to create a custom policy definition in Azure.

Policy 1: Disable public endpoint for PaaS services

Why: PaaS endpoints are accessible over the public internet by default.

How: This policy prevents users from creating Azure PaaS services with public endpoints and invokes an error if the private endpoint is not configured at resource creation.

Note: In Azure, the resource that enables the private endpoint is Azure Private Link. Please refer to the Resources section at the end of this article for links to related Azure documentation.

Figure 2 depicts the Azure Portal screen when the policy criteria are not met:

1. Validation fails because of the governance policy

2. Error Details indicate the Azure Policy that disallows the Public Endpoint creation

3. In the Networking section we see that “Private endpoint” setting is set to “None”

4. Once the Private endpoint is added, the policy validation passes (Figure 3)

Figure 2


 

Figure 3


Policy 2: Deny creation of a private DNS zone with a Private Link prefix

Why: By default, when you create a private endpoint, a private DNS zone is created on each spoke subscription.

As a centralized DNS with a conditional forwarder and private DNS zones is used in our architecture, we need to prevent users from creating their own Private Link private DNS zones for each new resource added to the network. If ungoverned, sprawl would occur.

How: This policy prevents creation of a private DNS zone with a Private Link prefix in the spoke subscriptions. With Policy 3 that follows, we associate the newly created resource with a central, private DNS zone already in the hub.

Figure 4 shows the Azure Portal screen when the policy criteria are not met and the user tries to deploy a DNS zone for a Private Link.

1. Deployment fails due to policy

2. Error Details shows the Azure Policy that denied creation of resource and the reason

Figure 4


To avoid the deployment error, during resource creation, users must set the “Integrate with private DNS zone” to “No” (Figure 5).

Figure 5


 

If the user tries to create a private endpoint with Private Link integration, the policy denies creation of the resource during validation. Figure 6 depicts the Azure Portal resource creation screen when the “Integrate with DNS private zone?” setting is set to “Yes”.

1. Integrate with Private DNS Zone is set to “Yes”.

2. Error details reference the policy that denied creation of resource, and reason.

Figure 6


 

Figure 7 depicts the Azure Portal screen when the “Integrate with DNS private zone?” setting is set to “No”.

3. The setting is observed in the Networking configuration

4. Policy validation passes

Figure 7


Policy 3: “Deploy If Not Exists” policy to automate DNS entries

Why: As described above, since the “Integrate with DNS private zone?” setting is set to “No”, a DNS zone for the Private Link is not created. Therefore, we need to have a method to integrate the Private Link with the centralized DNS zone of the hub. Out of the box, Azure does not provide this option during resource creation.

How: We use a Remediation policy to automate the DNS entry. Within Azure, resources that are non-compliant to a deployIfNotExists policy can be put into a compliant state through Remediation.

The Azure portal screen captures below depict the policy remediation plan:

1. In Figure 8 we see the policy to remediate. The Remediation task automatically adds the Azure resource DNS record to the central private DNS zone.

2. Figure 9 shows that the remediation policy successfully added the DNS entries on the private DNS zone for the respective Private Link DNS records.

Figure 8


 

Figure 9


 

Conclusion

In this article we have shown how one can securely deploy Azure PaaS resources with private endpoints. While thoughtful hybrid network planning is a given, Azure governance is an ingredient for success that is often overlooked. We hope you explore the resources provided below to learn more about Azure Private Link, how DNS in Azure is managed and how Azure Policy can automate the governance of resource creation once the network and security foundation is in place. Contact SNP Technologies here.

Resources

Simplify Cloud Security Across Hybrid & Multicloud with Azure Arc

Cloud infrastructure usage has seen tremendous growth in the past few years. As an established Microsoft Gold Partner, SNP is in a unique position where we help our customers build and manage their Cloud platform securely.

Leveraging Microsoft Azure,  we are blurring the lines between the traditional categories of platform and management as we deliver an open cloud platform that has built-in security and operations management – and can still meet the needs of our large enterprise customers.

Some of the key features that can help you monitor, secure, and manage your hybrid cloud with the broad built-in security and management capabilities are:

Azure Governance and Compliance: 

Azure governance features help implement governance across environments: creating management hierarchies, applying Azure policies, creating blueprints, managing inventory, and optimizing cost with Azure Cost Management.

Azure Cost Management:

Cost management is a critical concern for many businesses. With this feature now available to customers and partners for free, Azure spend can be managed and optimized seamlessly across Azure, AWS, and Google Cloud Platform.

Microsoft Defender for Cloud for Hybrid Workloads:

Microsoft Defender for Cloud helps you protect all workloads running in Azure, on-premises and in other cloud platforms from cyber threats. With the recent release of new capabilities, customers can better detect and defend against advanced threats, automate and orchestrate security workflows, and streamline the investigation of threats.

Azure Auto Manage for Virtual Machines:

This feature simplifies management of the entire VM lifecycle by enrolling your existing virtual machines in services like Microsoft Defender for Cloud, VM inventory, backup, VM insights, update management, change tracking, DSC, guest configuration, and more.

End-to-End Monitoring of Applications & Infrastructure:

The new Azure monitor user experience centralizes the monitoring services together, so that you can get visibility across your infrastructure and applications. In addition, the application insights feature has been further optimized for application performance monitoring and failure diagnostics in applications.

Azure Arc – Hybrid Workload Management:

Customers can now manage their hybrid server infrastructure located on-premises or in another cloud platform (AWS, Google, etc.). For hybrid servers, Azure Arc delivers inventory with a single pane of management, update management, Azure policies, Microsoft Defender for Cloud, integration of device logs with Sentinel, Azure automation, configuration change tracking, Automanage for Arc-enabled servers, and efficient management of Windows and Linux virtual machines in Azure and across hybrid environments.

For more information on hybrid cloud security and management, contact an SNP representative.