The best way to get access is by using azure.identity.InteractiveBrowserCredential from the azure-identity package. This opens the browser and prompts the user to log in. The credential can then be used to instantiate an azure.storage.filedatalake.DataLakeServiceClient from azure-storage-file-datalake, which lets us upload files.
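A minimal sketch of that flow (the account, filesystem, and file names here are placeholders):

from azure.identity import InteractiveBrowserCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Opens the browser and prompts the user to log in.
credential = InteractiveBrowserCredential()

service_client = DataLakeServiceClient(
    account_url="https://mystorageaccount.dfs.core.windows.net",
    credential=credential,
)
file_client = service_client.get_file_client(
    file_system="my-container", file_path="uploads/data.csv"
)
with open("data.csv", "rb") as data:
    file_client.upload_data(data, overwrite=True)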
Then, to kick off ADF jobs, we can use the same credential and pass it to azure.mgmt.datafactory.DataFactoryManagementClient from azure-mgmt-datafactory. As a nice little aside, you can use the webbrowser module from the Python standard library to open your browser at the ADF run by crafting a URL containing the run ID, so you can monitor its progress (see the sketch below).
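Roughly, that looks like the following; the subscription, resource group, factory, and pipeline names are placeholders, and the monitoring URL scheme is an assumption based on the current ADF portal:

import webbrowser

from azure.identity import InteractiveBrowserCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = InteractiveBrowserCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# Kick off a pipeline run; this returns an object carrying the run ID.
run = adf_client.pipelines.create_run(
    resource_group_name="my-resource-group",
    factory_name="my-factory",
    pipeline_name="my-pipeline",
)

# Craft a monitoring URL from the run ID. The exact URL format is an
# assumption; adjust it to whatever the ADF portal currently uses.
monitor_url = (
    "https://adf.azure.com/monitoring/pipelineruns/"
    f"{run.run_id}"
    "?factory=/subscriptions/<subscription-id>"
    "/resourceGroups/my-resource-group"
    "/providers/Microsoft.DataFactory/factories/my-factory"
)
webbrowser.open(monitor_url)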
This all seemed pretty straightforward, but accessing an Azure SQL instance was somewhat harder. To obtain a token, I referred to this SO answer. I used the InteractiveBrowserCredential as before, but then I needed:
import struct
import pyodbc

driver = "{ODBC Driver 18 for SQL Server}"  # or whichever driver is installed
server = "myserver.database.windows.net"
database = "mydatabase"

token = credential.get_token("https://database.windows.net/.default")
connection_string = (
    f"Driver={driver};"
    f"Server={server};"
    f"Database={database};"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)
# pyodbc expects the token as a length-prefixed UTF-16-LE byte string.
token_bytes = token.token.encode("UTF-16-LE")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
# 1256 is SQL_COPT_SS_ACCESS_TOKEN, the ODBC attribute for passing an access token.
conn = pyodbc.connect(connection_string, attrs_before={1256: token_struct})
which is a bit more involved but allowed me to connect.
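Once connected, the connection behaves like any other pyodbc connection; for example (the table name is a placeholder):

cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM dbo.my_table")  # hypothetical table
for row in cursor.fetchall():
    print(row)
cursor.close()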