0 votes
2 answers
106 views

Azure Synapse Apache Spark Pools: .gz package added but notebook run error says not found

I have notebooks that contain R code. On their own, they work fine when run manually. To schedule and automate these notebooks, we have to use pipelines to call the R notebooks. However, pipelines don'...
CFCJB John
0 votes
1 answer
216 views

Is there a way for notebooks in synapse to take in pipeline variables?

New to this, so I apologise for any frustrations. I am developing a notebook which can handle API requests; however, I want to use one notebook which can take in the pipeline variables so the same notebook ...
MahiC98
  • 35
0 votes
1 answer
63 views

Power BI embedded report is taking more than 30 min in Azure Synapse notebook

I am running the following code in an Azure Synapse notebook. The report._embedded=True line runs quickly, but retrieving the pages takes more than 30 minutes, which is not acceptable in a production ...
user1702932
0 votes
1 answer
85 views

Azure Synapse SparkR Notebook: How to load a .yaml file holding credentials from an ADLS Gen2 directory

Is it possible to load a .yaml file into an Azure Synapse SparkR notebook? How do I load a .yaml file holding credentials from an ADLS Gen2 directory?
CFCJB John
0 votes
1 answer
74 views

Parse a SQL script via Python into a table of field, name, and the table it comes from

I am trying to create a helpful table because we have a lot of raw tables and views with business definitions. For example, the design for my v_ods_ACT_Master would be SELECT MBACCT AS [Account Number],...
snow
  • 1
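For the SQL-parsing question above, the core extraction can be sketched with Python's `re` module. This is a minimal sketch under the assumption that the view definitions follow the `<field> AS [<business name>]` pattern quoted in the excerpt; the one-line DDL below is hypothetical, not the asker's real view.

```python
import re

# Hypothetical view definition in the style quoted in the question.
ddl = "SELECT MBACCT AS [Account Number], MBNAME AS [Customer Name] FROM ods.ACT_Master"

# Capture each "<field> AS [<business name>]" pair.
pairs = re.findall(r"(\w+)\s+AS\s+\[([^\]]+)\]", ddl, flags=re.IGNORECASE)

# Capture the source table after FROM (first match only).
table = re.search(r"\bFROM\s+([\w.]+)", ddl, flags=re.IGNORECASE).group(1)

# One row per field: field, business name, and the table it comes from.
rows = [{"field": f, "name": n, "table": table} for f, n in pairs]
```

A real script with expressions, joins, or nested queries would need a proper SQL parser rather than regexes; this sketch only covers the simple alias pattern shown.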
0 votes
1 answer
106 views

Unable to Drop a View in Serverless SQL Pool from Synapse Notebook

I am trying to drop a view from my Synapse notebook using a connection to a Serverless SQL Pool. I followed the instructions from this post: Access our built-in serverless SQL pool from our notebook ...
Moody_girl
1 vote
1 answer
504 views

Access our built-in serverless SQL pool from our notebook

We are currently trying to access our built-in serverless SQL pool from our notebook. Our goal is to be able to drop views from the notebook, which is something we need to put into our pipeline as per ...
Moody_girl
1 vote
1 answer
96 views

How to Connect Synapse Notebook to SQL Serverless Pool

I'm working with Azure Synapse Analytics and I need to connect my Synapse Notebook to my SQL Serverless Pool to drop and create views directly within the notebook. However, when I try to run the DROP ...
Moody_girl
0 votes
1 answer
146 views

Parquet file not overwriting in Azure Synapse notebooks

I am working in Azure Synapse Analytics notebooks and I want to load Parquet files into DataFrames, perform some transformations, and then overwrite the original files with the transformed data. ...
Moody_girl
0 votes
1 answer
145 views

ADF Web Activity Error: The request content is not valid and could not be deserialized: 'After parsing a value an unexpected character was encountered

I am getting the error message in the Web activity of ADF: The request content is not valid and could not be deserialized: 'After parsing a value an unexpected character was encountered: Further looking ...
Govind Gupta
  • 1,695
0 votes
1 answer
208 views

How to Copy Files into Individual Folders in Synapse Analytics

I originally had files in my silver folder. I was using notebooks to make transformations to these files and then overwriting the original files with the transformed ones. After speaking to a Synapse ...
Moody_girl
0 votes
1 answer
220 views

How to Overwrite a Parquet File in the Same Location Using PySpark

I'm working with PySpark within Synapse notebooks and I need to load a Parquet file into a DataFrame, apply some transformations (e.g., renaming columns), and then save the modified DataFrame back to ...
Moody_girl
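The overwrite-in-place question above runs into Spark's lazy evaluation: overwriting the same path a DataFrame is still reading from can fail or lose data. One common workaround is to write the transformed output to a temporary sibling directory and then swap it into place. Sketched here in plain Python file operations (the function name and directory layout are made up for illustration); in PySpark the analogous pattern would be `df.write.mode("overwrite").parquet(tmp_path)` followed by a filesystem move.

```python
import os
import shutil
import tempfile

def overwrite_in_place(src_dir: str, transform) -> None:
    """Read files from src_dir, write transformed copies to a temp dir,
    then replace src_dir with the temp dir only after writing succeeded."""
    tmp_dir = tempfile.mkdtemp(dir=os.path.dirname(src_dir) or ".")
    for name in os.listdir(src_dir):
        with open(os.path.join(src_dir, name)) as fh:
            data = fh.read()
        with open(os.path.join(tmp_dir, name), "w") as fh:
            fh.write(transform(data))
    shutil.rmtree(src_dir)        # drop the originals last, never first
    os.rename(tmp_dir, src_dir)   # swap the transformed copy into place
```

The key design point is ordering: the source is only deleted after the full transformed copy exists, so a failed transform never destroys the input.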
0 votes
1 answer
86 views

DataFrame to filter rows having Special Characters

I have a DataFrame with around 50K to 100K rows and about 4 columns. We need to filter the DataFrame to discard any rows containing special characters: 199 Central Avenue 1664 O'block Road 1630 ...
Nanda
  • 61
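The core of the special-character filter asked about above is just a regular-expression whitelist; in PySpark it would typically be applied with something like `df.filter(~col("address").rlike(r"[^A-Za-z0-9' ]"))`. The logic can be checked in plain Python first. What counts as "special" is an assumption here: anything other than letters, digits, spaces, and apostrophes, since the sample rows include "O'block Road".

```python
import re

# Assumed whitelist: letters, digits, spaces, apostrophes. Anything else
# in a row marks that row for removal.
SPECIAL = re.compile(r"[^A-Za-z0-9' ]")

rows = ["199 Central Avenue", "1664 O'block Road", "12 Main St. #4"]
clean = [r for r in rows if not SPECIAL.search(r)]
```

Adjust the character class to whatever the data actually permits (hyphens, periods, etc.); the whitelist approach is safer than enumerating "bad" characters.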
0 votes
1 answer
294 views

Databricks SQL query for finding rows having special characters and discarding those rows

As part of Databricks, we would like to filter rows having special characters in the columns. Let's say we have a table with data like: Table1 has Col1 199 Central Avenue 1664 O'block Road 1630 Hahn'...
Nanda
  • 61
1 vote
2 answers
136 views

Using spaCy in Microsoft Fabric

Our organisation is trialling MS Fabric, and I'm trying out Notebooks. I've managed to create an environment and bring in the NLTK and spaCy libraries. However, typically when you use spaCy you also ...
GlassShark1
0 votes
1 answer
122 views

Storing a View in a Dataframe in Azure Synapse Notebook

I'm attempting to store the top 100 rows of a view from an SQL database to a dataframe using Synapse Notebook. I have tried using this code: %%pyspark df = spark.sql("SELECT DataType, TableName, ...
Moody_girl
2 votes
1 answer
414 views

How to use UAMI authentication in an Azure Synapse Spark notebook

We can use a User-Assigned Managed Identity (UAMI) to authenticate to an ADLS Gen2 source successfully in an Azure Synapse pipeline. However, we are unable to connect it ...
user24568243
0 votes
2 answers
147 views

ModuleNotFoundError: No module named 'azure.mgmt.eventhub' on synapse notebook even though it is in requirements.txt

We have the azure-management library defined in the requirements.txt: azure-mgmt-eventhub==11.0.0 But when trying to import the library it is not found: This has been intermittent for other ...
WestCoastProjects
0 votes
1 answer
45 views

Migrate multiple notebooks from current environment to new environment

I am trying to migrate multiple notebooks from the current environment to a new UC-enabled environment. I am looking for a solution where I can migrate groups of notebooks, or each folder of notebooks, from ...
bharath kumar
1 vote
1 answer
1k views

Databricks SQL query for finding non alphanumeric values in a column

I tried finding many questions in this forum but couldn't find one specific to Databricks SQL, so I'm posting here; any help or a link to an existing question would be appreciated. How to find all the ...
Mohammed Arif
0 votes
1 answer
42 views

Pyspark: combine columns in different rows into a single row, ordered by another column

I have a dataframe which has 2 columns, CLMN_SEQ_NUM and CLMN_NM. I am trying to combine the CLMN_NM values into a single comma-separated row. Desired o/p: PR_NAME,PR_ID,PR_ZIP,PR_ADDRESS,PR_COUNTRY ...
newbie
  • 55
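For the combine-into-one-row question above, the logic is "sort by the sequence column, then join the name column with commas". In PySpark this would typically be done with `collect_list` over a struct plus `concat_ws`; the ordering-and-joining step itself is easy to verify in plain Python. The `(seq, name)` pairs below are hypothetical sample data matching the desired output quoted in the question.

```python
# Hypothetical (CLMN_SEQ_NUM, CLMN_NM) pairs, deliberately out of order.
rows = [(2, "PR_ID"), (1, "PR_NAME"), (4, "PR_ADDRESS"), (3, "PR_ZIP"), (5, "PR_COUNTRY")]

# Sort by CLMN_SEQ_NUM, then join CLMN_NM values with commas.
combined = ",".join(name for _, name in sorted(rows))
```

The important detail, in Spark as here, is that the sort key must travel with the value until after the sort; collecting names alone gives no ordering guarantee.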
0 votes
2 answers
836 views

Pyspark - %run magic command in Synapse Notebook: pass dynamic parameter

In a Synapse notebook, I'm using the %run magic command to run a notebook and it works well: %run "5 Bronze to silver/Daily/NB_B2S" { "exclude_bronze_notebook_names": '[&...
coding
  • 167
-1 votes
1 answer
207 views

Corrupt Synapse Notebook

I have a couple of Synapse notebooks. When I try to delete them and publish, I get the following error. As the error suggested, I tried renaming and publishing. It still doesn't let me do it. Is it ...
Developer
0 votes
1 answer
207 views

How to convert JSON file data into binary Base64 format using ADF or a notebook?

I have a requirement to convert the source JSON file data into binary Base64 format. I tried with a Copy activity/binary dataset or using a dataflow, but converting the complete file data is not possible. ...
Nezko1
  • 25
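On the notebook side, the JSON-to-Base64 conversion asked about above is a two-liner with the standard library: serialize the document to bytes, then Base64-encode them. The sample document below is hypothetical.

```python
import base64
import json

# Hypothetical document standing in for the source JSON file's contents.
doc = {"id": 1, "name": "sample"}

raw = json.dumps(doc).encode("utf-8")            # JSON text as bytes
encoded = base64.b64encode(raw).decode("ascii")  # Base64 string

# Round-trip to confirm nothing is lost in the encoding.
decoded = json.loads(base64.b64decode(encoded))
```

For a whole file, read it in binary mode (`open(path, "rb").read()`) and encode those bytes directly; there is no need to parse the JSON first unless the content must be transformed.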
0 votes
1 answer
1k views

Cast Date and Timestamp in Pyspark

I have an i/p file which is coming in a CSV format where date and timestamp are coming as String in the formats mm/dd/yy and yyyy-mm-dd-hh.mm.ss.SSSSSS. I am writing a parquet file and converting to ...
newbie
  • 55
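The two string formats in the question above map directly onto `strptime` directives, which is a quick way to sanity-check the patterns before wiring them into Spark. In a Spark notebook the equivalent would likely be `to_date(col, "MM/dd/yy")` and `to_timestamp(col, "yyyy-MM-dd-HH.mm.ss.SSSSSS")` (Spark uses Java-style patterns, so verify against your Spark version). The sample values below are made up.

```python
from datetime import datetime

date_str = "01/15/24"                    # mm/dd/yy
ts_str = "2024-01-15-13.45.30.123456"    # yyyy-mm-dd-hh.mm.ss.SSSSSS

# %y = two-digit year, %f = microseconds; dots and dashes are literals.
d = datetime.strptime(date_str, "%m/%d/%y").date()
ts = datetime.strptime(ts_str, "%Y-%m-%d-%H.%M.%S.%f")
```

Note that `%y` applies a century pivot (e.g. "24" becomes 2024), so confirm that matches how the upstream system intends two-digit years.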
0 votes
1 answer
419 views

ADF for each child item results in duplication of parameters

I have an ADF pipeline in which I use the Get Metadata activity to get all the files that were modified today, followed by a ForEach activity that runs a notebook activity for each modified file. The ...
GLin
  • 3
0 votes
1 answer
62 views

PySpark: parsing complex JSON

I'm new to pyspark and I'm having issues converting a JSON string that is returned from an API call to a dataframe. I've tried a variety of solutions found online, with no success so far. With just ...
Scott S
  • 31
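For the API-response question above, one approach before involving Spark at all is to flatten the nested JSON into single-level dicts, which load cleanly into a dataframe. A minimal recursive flattener, sketched in plain Python with a hypothetical payload:

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested dicts into a single-level dict with dotted keys."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, name + "."))
        else:
            out[name] = value
    return out

payload = json.loads('{"id": 7, "meta": {"source": "api", "tags": [1, 2]}}')
flat = flatten(payload)
```

A list of such flat dicts can then be handed to `spark.createDataFrame(...)`; alternatively, Spark's own `from_json` with an explicit schema avoids the pre-flattening entirely, at the cost of writing the schema out.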
0 votes
2 answers
677 views

Read "Integrated Dataset" into a Dataframe with Azure Synapse Notebook

I know how to read a file in the Azure Data Lake Storage, but I don't know how to read a file that is listed under the Integration Dataset section in Azure.
Bama
  • 11
1 vote
1 answer
581 views

Access the cell outputs from a synapse notebook using a (python) synapse api

We need to perform post-processing on all pipeline runs including evaluating the output of each cell. The intent is to audit the cell outputs directly - not to generate emails or other new artifacts....
WestCoastProjects
0 votes
1 answer
623 views

Spark SQL databricks Create Table using CSV Options Documentation

Do you know where the proper documentation for Spark SQL on Databricks is? For example, I wish to know the full list of options for creating a table using CSV in an Azure Databricks notebook. Thanks for ...
ausmod
  • 71
0 votes
1 answer
134 views

Referenced Notebook not found

I was given a notebook to run in Azure Databricks. I imported the notebook to my user folder Workspace\Users\[email protected]. The following command in the notebook gives the error shown below. ...
nam
  • 23.7k
0 votes
0 answers
207 views

Azure Data Studio Notebook: Maximum Call Stack size Exceeded

I am using Azure Data Studio to write notebooks and run queries against a SQL database instance. When I try running two different cells I get the error: Maximum Call Stack size Exceeded. Is there a ...
Francesco Pegoraro
0 votes
1 answer
101 views

How to replace parameters in one table with values from another table?

I am currently working on creating a JSON file from data in a processed CSV file through an SQL query, and I am performing this task in a notebook in Synapse. Currently, I have the JSON structure ...
tavo92
  • 9
0 votes
1 answer
99 views

How to build JSON hierarchy tree using python?

I am currently working with a CSV file generated from a SQL query. This file contains all the fields and calculations necessary for my analysis. My current task is to transform this data into a JSON ...
tavo92
  • 9
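For the JSON-hierarchy question above, a common pattern is to index every row by its id, then attach each row to its parent's children list. This is a sketch under the assumption that the flat CSV rows carry an id, a parent reference (empty for roots), and a name; the column names and sample rows are hypothetical.

```python
def build_tree(rows):
    """rows: dicts with 'id', 'parent' (None for roots), and 'name'.
    Returns the list of root nodes, each with a nested 'children' list."""
    nodes = {r["id"]: {"name": r["name"], "children": []} for r in rows}
    roots = []
    for r in rows:
        node = nodes[r["id"]]
        if r["parent"] is None:
            roots.append(node)
        else:
            nodes[r["parent"]]["children"].append(node)
    return roots

rows = [
    {"id": 1, "parent": None, "name": "root"},
    {"id": 2, "parent": 1, "name": "child-a"},
    {"id": 3, "parent": 1, "name": "child-b"},
]
tree = build_tree(rows)
```

Because all nodes are created before any are linked, the input rows may arrive in any order (children before parents is fine). `json.dumps(tree)` then yields the nested JSON output.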
0 votes
1 answer
229 views

Is it possible to use wildcard in Synapse Notebook?

I want to read parquet files in Synapse Notebook. I tried it using wildcard but the "FileNotFoundError" occurred. The folder structure I want to read is like this. test/year={yyyy}/month={MM}...
CuteeeeRabbit
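A likely cause of the FileNotFoundError above is that `{yyyy}`/`{MM}` are literal text to Spark; path globs generally want one `*` per segment, e.g. `test/year=*/month=*/*.parquet`. The matching idea can be illustrated locally with `fnmatch` (an approximation only: `fnmatch`'s `*` crosses `/`, while Hadoop-style globs do not, so treat this as a sketch of the pattern shape, not of Spark's exact semantics). The paths below are hypothetical.

```python
from fnmatch import fnmatch

paths = [
    "test/year=2024/month=01/part-000.parquet",
    "test/year=2024/month=02/part-000.parquet",
    "test/other/file.parquet",
]

# One '*' per path segment, instead of literal {yyyy}/{MM} placeholders.
pattern = "test/year=*/month=*/*.parquet"
matched = [p for p in paths if fnmatch(p, pattern)]
```

In the notebook itself, passing the wildcard path straight to `spark.read.parquet("test/year=*/month=*/")` is the usual approach; Spark also recovers the `year`/`month` partition columns automatically when reading the partition root.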
0 votes
0 answers
664 views

Logging Databricks notebook to AppInsights in Python using OpenTelemetry

Our Databricks notebook is triggered via an ADF pipeline. I would like to add logging in my Python notebook and would like that logging information to be viewable in AppInsights. I would like ...
Codecrasher99
-1 votes
1 answer
195 views

How to open sql notebook saved as markdown as an interactive notebook again?

I just started using Azure Data Studio because I wanted to store my SQL code as a notebook. When saving it, I saved it as Markdown, and now when I open it I can't get the interactive notebook layout that I ...
Fatima Masud
1 vote
1 answer
505 views

Creating a blank delta table in Azure Synapse Analytics with an identity column

I'm new to ASA and I am trying to create a blank delta table with an identity column that auto-increments by 1. Is there any way to do this without using dedicated SQL? I tried using T-SQL syntax but it ...
rjrpacis
1 vote
1 answer
314 views

PySpark - Synapse notebook: don't throw an error if the dataframe finds no files

I have a Synapse notebook in which I am creating a dataframe based on parquet data. I am also filtering the files, to ensure I only pick up the new files. ReadDF = spark.read.load(readPath,format="...
Oblivi0n
0 votes
1 answer
2k views

What is the max limit on Databricks text widgets

dbutils.widgets.text('input_query',"") inquery= dbutils.widgets.get('input_query') I tried to give a big string inside the text widget; it does not allow passing anything above 2048 ...
Surender Raja
0 votes
3 answers
709 views

Access Cosmos DB through an Azure Synapse Analytics notebook using a system-assigned managed identity linked service

I have made a linked service for Cosmos DB NoSQL using system-assigned managed identity as the auth type, and the linked service is published as well. Now when I access this linked service from Synapse ...
Ali Naqi
0 votes
3 answers
2k views

One Spark session for all notebooks in Synapse

I can't find any solution for starting one Apache Spark session for all notebooks in one pipeline. Any ideas?
Dev
  • 71
0 votes
1 answer
1k views

Can I run Databricks notebook cells on an if condition? If true, run all cells; if false, run only the bottom 5 cells

I want to combine my 2 different notebooks and add one condition on a received parameter. If the parameter is True, then run all cells of that notebook; if the parameter is False, only run the added code from another ...
Rashmi Jadhao
0 votes
1 answer
984 views

mssparkutils.notebook.exit in Try block

mssparkutils.notebook.exit isn't exiting properly when used in a try block; it is raising an exception. Can someone help me understand why it isn't working inside a try block, and how to make it work? ...
practicalGuy
  • 1,318
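A plausible explanation for the exit-in-try question above (an assumption; the mssparkutils internals aren't documented in this listing) is that `notebook.exit` signals the exit by raising an internal exception, which a broad `except Exception:` then swallows. The mechanism can be demonstrated with a stand-in exception class; the usual fixes are to re-raise that exception type or keep the exit call outside the try block.

```python
class NotebookExit(Exception):
    """Stand-in for the internal exception an exit call may raise."""

def notebook_exit(value):
    # Exit implemented as control flow via an exception.
    raise NotebookExit(value)

def run_swallowed():
    try:
        notebook_exit("done")
    except Exception:          # broad except catches the exit signal too
        return "exit was swallowed"

def run_propagated():
    try:
        notebook_exit("done")
    except NotebookExit:       # let the exit signal pass through untouched
        raise
    except Exception:
        return "real error handled"
```

The second form keeps genuine error handling while letting the exit behave normally, which is the standard way to coexist with exception-based control flow.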
0 votes
1 answer
934 views

Save Spark dataframe to a dynamic Path in ADLS using Synapse Notebook

I am trying to use a Synapse notebook with PySpark to read a bunch of parquet files and reprocess them into a different folder structure, "YYYY/MM/YYYY-MM-DD.parquet", based on the created_date of ...
Oblivi0n
2 votes
1 answer
854 views

Combine multiple notebooks to run single pipeline

I have 8 separate notebooks in Databricks, so I am currently running 8 different pipelines on ADF; each pipeline contains one notebook, and so on. Is there a way to run a single pipeline which runs ...
Keerthana Iyengar
0 votes
1 answer
442 views

What is the usage of createGlobalTempView or createOrReplaceGlobalTempView in Synapse notebook?

We know the Spark pool in Synapse does not work like the Databricks cluster model. We make use of GlobalTempViews in Databricks, where they can be attached to a cluster and other notebooks can access the ...
shanmukh SS
0 votes
1 answer
362 views

How to rollback uncommitted changes to a Synapse Notebook?

I have made changes directly to the backing GitHub repo's JSON notebook structure that mirror what has been done in the online notebook. To verify the repo changes, I would like to revert the online ...
WestCoastProjects
2 votes
0 answers
303 views

How to make use of custom jar in Synapse Notebook after uploading it in Azure Synapse Workspace?

I am trying to add my customized jar to the Azure Synapse workspace to make use of a user-defined function (UDF) present in the jar while running SQL queries in a Synapse notebook. An example: there is a UDF ...
Pranay Ramtekkar
2 votes
1 answer
405 views

Convert Azure Synapse notebooks [in JSON] to Python or Jupyter

Local/IDE-based development allows use of the most powerful IDEs [PyCharm in particular]. How can the notebooks presently in JSON format in our git repo be converted to Jupyter/ipynb and/or ...
WestCoastProjects
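For the conversion question above, a small script can lift cells out of the Synapse Git JSON into a minimal nbformat-v4 document. This sketch assumes the Synapse file keeps its cells under `properties.cells` with `cell_type` and `source` keys; verify that structure against the actual files in your repo before relying on it.

```python
import json

def synapse_to_ipynb(synapse_json: str) -> dict:
    """Convert a Synapse notebook JSON string to a minimal nbformat-v4 dict.

    Assumes cells live under properties.cells with 'cell_type' and 'source';
    check this against your repo's files, as the layout may differ.
    """
    src = json.loads(synapse_json)
    cells = []
    for cell in src.get("properties", {}).get("cells", []):
        converted = {
            "cell_type": cell.get("cell_type", "code"),
            "metadata": {},
            "source": cell.get("source", []),
        }
        if converted["cell_type"] == "code":
            # nbformat requires these two keys on code cells.
            converted["execution_count"] = None
            converted["outputs"] = []
        cells.append(converted)
    return {"nbformat": 4, "nbformat_minor": 5, "metadata": {}, "cells": cells}

# Hypothetical Synapse notebook JSON for illustration.
example = json.dumps({
    "name": "nb1",
    "properties": {"cells": [{"cell_type": "code", "source": ["print('hi')\n"]}]},
})
nb = synapse_to_ipynb(example)
```

Writing `json.dumps(nb)` to a `.ipynb` file gives something Jupyter and PyCharm can open; running the result through the `nbformat` library's validator would catch any schema details this sketch omits.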