[SOLVED] Python script not working when I add def myfunction():

Issue

I have some code that works when I run it on its own, but when I wrap it in a function definition it doesn't work. I don't get any errors, so it does run; however, it doesn't pull back the latest files or update the CSV file, it just shows the same data as the previous day.

This script updates a report and previously worked when a colleague ran it, but I can't get it to work myself. The code below is my current version:

def typose(): 
    today = datetime.today().strftime('%d%m%Y')

    yesterday = datetime.now() - timedelta(1)
    yesterday1 = yesterday.strftime('%d%m%Y')

###############################################################################
#################### ESTABLISH CONNECTION TO ESENDEX SFTP #####################
###############################################################################    


# Open a transport
# host,port = "sftp.esendex.com",22
    host,port = "10.132.0.1",22
    transport = paramiko.Transport((host,port))

# Auth    
    username,password = "bocsurveys","lfxDmr4i"
    transport.connect(None,username,password)

# Go!    
    sftp = paramiko.SFTPClient.from_transport(transport)


###############################################################################
######################## PICK UP THE FILE FOR THE SMS #########################
############################### FROM ESENDEX ##################################

# Download the SMS
    filepathsms = "/FromEsendex/CX_Survey_SMS_output_2_"+today+".csv"
    localpathsms = "C:/Users/l0ad06/Desktop/Daily Feedback from Esendex/CX_Survey_SMS_output_2_"+today+".csv"
    sftp.get(filepathsms ,localpathsms)

    filepathsms2 = "/FromEsendex/CX_Survey_SMS_output_1_"+yesterday1+".csv"
    localpathsms2 = "C:/Users/l0ad06/Desktop/Daily Feedback from Esendex/CX_Survey_SMS_output_1_"+yesterday1+".csv"
    sftp.get(filepathsms2 ,localpathsms2)


    filename = "C:/Users/l0ad06/Desktop/Daily Feedback from Esendex/CX_Survey_SMS_output_2_"+today+".csv"
    filename2 = "C:/Users/l0ad06/Desktop/Daily Feedback from Esendex/CX_Survey_SMS_output_1_"+yesterday1+".csv"



###############################################################################
################## CREATING ONE RECORD PER DELIVERY NUMBER ####################
###############################################################################
    ##df1 = pandas.read_csv(filename,
    ##                usecols= ['Question Label','Answer Label',
    ##                         'Answer DateTime','Delivery Number',
    ##                         'ShipTo Number'], encoding= 'unicode_escape')
    
    df1 = pandas.read_csv(filename, usecols =[2,4,5,12,23],
                   encoding= 'unicode_escape')


    df1 = df1.rename(columns= {df1.columns[0]: "Question Label",
                             df1.columns[1]: "Answer Label",
                             df1.columns[2]: "Answer DateTime",
                             df1.columns[3]: "Delivery Number",
                             df1.columns[4]: "ShipTo Number"})

     
    
# Filter only the records with scores
    clean_data1 = df1[df1['Question Label'] != 2]
    clean_data1 = clean_data1[clean_data1["Question Label"].notnull()]
    clean_data2 = clean_data1[clean_data1['Answer Label'] != 'Error']

    clean_df1 = pandas.DataFrame(clean_data2,
                       columns = ['Answer Label',
                                  'Answer DateTime',
                                  'Delivery Number',
                                  'ShipTo Number'])

# Rename the columns
    cleandf1 = clean_df1.rename(columns={"Answer Label": "Score",
                        "Answer DateTime": "Created",
                        "Delivery Number": "Delivery",
                        "ShipTo Number": "ShipTo" }) 




    ##df2 = pandas.read_csv(filename,
    ##               usecols= ['Question Label',
    ##                       'Answer DateTime',
    ##                     'Answer Text',
    ##                   'Delivery Number',
    ##                 'ShipTo Number'], encoding= 'unicode_escape')
    
    
    df2 = pandas.read_csv(filename, usecols =[2,5,6,12,23],
                   encoding= 'unicode_escape')
  

    df2 = df2.rename(columns= {df2.columns[0]: "Question Label",
                             df2.columns[1]: "Answer DateTime",
                             df2.columns[2]: "Answer Text",
                             df2.columns[3]: "Delivery Number",
                             df2.columns[4]: "ShipTo Number"})


# Filter only the records with comments
    clean_data3 = df2[df2['Question Label'] != 1]
    clean_data3 = clean_data3[clean_data3["Question Label"].notnull()]
    clean_df2 = pandas.DataFrame(clean_data3,
                       columns = ['Answer Text',
                                'Delivery Number',
                                'ShipTo Number'])

# Rename the columns
    cleandf2 = clean_df2.rename(columns={"Answer Text": "Comment",
                                   "Delivery Number": "Delivery",
                                   "ShipTo Number": "ShipTo" })

  ##  df3 = pandas.read_csv(filename,
  ##                usecols= ['Classification Code','Classification Text',
  ##                              'Country Code',
  ##                            'Customer Post Code','Delivery Number',
  ##                             'GroupTo Code','GroupTo Name',
  ##                            'PGI Date','Plant Code',
  ##                           'Plant Name','Pricing Area',
  ##                           'Pricing Area Text','Sales Organisation',
  ##                         'ShipTo Number'], encoding= 'unicode_escape')
    
    df3 = pandas.read_csv(filename,
                    usecols= [7,8,9,11,12,13,14,16,17,18,19,20,22,23], encoding= 'unicode_escape')
  



    df3 = df3.rename(columns = {df3.columns[0]: "Classification Code",
                              df3.columns[1]: "Classification Text",
                              df3.columns[2]: "Country Code",
                              df3.columns[3]: "Customer Post Code",
                              df3.columns[4]: "Delivery Number",
                              df3.columns[5]: "GroupTo Code",
                              df3.columns[6]: "GroupTo Name",
                              df3.columns[7]: "PGI Date",
                              df3.columns[8]: "Plant Code",
                              df3.columns[9]: "Plant Name",
                              df3.columns[10]: "Pricing Area",
                              df3.columns[11]: "Pricing Area Text",
                              df3.columns[12]: "Sales Organisation",
                              df3.columns[13]: "ShipTo Number"})


    # dropping ALL duplicate values
    clean_df3 = df3.drop_duplicates() 
   

    cleandf3 = clean_df3.rename(columns={"Classification Code": "Classification_Code",
                                 "Classification Text": "Classification_Text",
                                 "Customer Post Code": "Customer_Postcode",
                                 "Country Code": "Country_Code",
                                 "Delivery Number": "Delivery",
                                 "GroupTo Code": "Group_To",
                                 "GroupTo Name": "Group_To_Name",
                                 "PGI Date": "PGI_Date",
                                 "Plant Code": "Plant",
                                 "Plant Name": "Plant_Name",
                                 "Pricing Area": "Pricing_Area",
                                 "Pricing Area Text": "Pricing_Area_Description",
                                 "Sales Organisation": "Sales_Organisation",
                                 "ShipTo Number": "ShipTo"  }) 




# Join the tables
    result1 = pandas.merge(cleandf1, cleandf2, how='left', on=['Delivery','ShipTo'])


    result2 = pandas.merge(result1, cleandf3, how='left', on=['Delivery','ShipTo'])

# Check the data types
    result2.dtypes

    result2['Created'] = pandas.to_datetime(yesterday)
    
  
# Change the data types
    result2 = result2.astype({'Score': 'str',
                           'Created': 'datetime64[ns]',
                           'Delivery': 'int64',
                           'ShipTo': 'int64',
                           'Comment':'str',
                           'Classification_Code':'str',
                           'Classification_Text':'str',
                           'Country_Code':'str',
                         'Customer_Postcode':'str',
                         'Group_To': 'float',
                         'Group_To_Name':'str',  
                        'PGI_Date': 'int64',
                         'Plant':'int64',
                         'Plant_Name':'str', 
                         'Pricing_Area':'str', 
                         'Pricing_Area_Description':'str', 
                         'Sales_Organisation': 'str'
                         })


    

# Add a column that will give us the channel
    result2['Channel'] = 'SMS'

# Export to a csv
    result2.to_csv(r'C:/Users/l0ad06/Desktop/Daily Feedback from Esendex/CX_Survey_SMS_output_2_'+today+'.csv', index = False)

    schedule.every().day.at("09:00").do(typose)

Does anyone know why it doesn’t work when I add def typose():?

Solution

The issue might be due to one of the following reasons:

  1. Indentation

Make sure every line of the function body is indented one level under the `def` line, like this:

 def typose():
     today = datetime.today().strftime('%d%m%Y')
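As a minimal sketch of the indentation rule (using a stand-in body, since the real one needs paramiko and pandas): everything indented under `def` belongs to the function, and a line back at column 0 ends it.

```python
from datetime import datetime, timedelta

def typose():
    # Every line indented under `def` belongs to the function body.
    today = datetime.today().strftime('%d%m%Y')
    yesterday1 = (datetime.now() - timedelta(1)).strftime('%d%m%Y')
    return today, yesterday1

# A line back at column 0 is OUTSIDE the function.
today, yesterday1 = typose()
print(today, yesterday1)
```

Note that in your posted code the `schedule.every()...` line is indented inside `typose()`, so it is part of the function body rather than top-level code.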
  2. Check that the function is actually called

Try checking whether `schedule.every().day.at("09:00").do(typose)` actually runs your code. A simple test: replace that line with a direct call, `typose()`. If the direct call works, then the scheduling line is registering the job but never executing it. Keep in mind that `.do(typose)` only registers the job; something must repeatedly call `schedule.run_pending()` for due jobs to fire. Also, in the code above the scheduling line is indented inside `typose()` itself, so it can only register the job when the function runs, and nothing ever calls the function.
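The register-versus-call distinction can be shown with plain Python; this uses a hypothetical stand-in for the real `typose`:

```python
calls = []

def typose():
    """Hypothetical stand-in for the real typose() body."""
    calls.append("ran")

# Passing the function WITHOUT parentheses, as schedule's .do(typose)
# does, only stores a reference; nothing executes here.
job = typose
print(calls)   # still empty: the body has not run yet

# Only an actual call executes the body.
job()
print(calls)
```

With the `schedule` library, registration alone is likewise not enough: a loop such as `while True: schedule.run_pending(); time.sleep(60)` must be running for due jobs to fire.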

  3. Check the function code

There might be a typo or a wrong value somewhere that executes without an error but does not do what you expect.
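One cheap way to catch that class of bug here is to print the date strings and remote paths before calling `sftp.get`, and compare them against the file names actually on the server (e.g. via `sftp.listdir('/FromEsendex')`). A sketch, with the path copied from the question:

```python
from datetime import datetime, timedelta

today = datetime.today().strftime('%d%m%Y')
yesterday1 = (datetime.now() - timedelta(1)).strftime('%d%m%Y')

filepathsms = "/FromEsendex/CX_Survey_SMS_output_2_" + today + ".csv"
print(filepathsms)  # compare with the listing of /FromEsendex
```

If the printed name does not match what is on the SFTP server (wrong date format, wrong day), the script can appear to "run" while fetching or producing stale data.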

If this doesn't help, let me know.

Answered By – Omar The Dev

Answer Checked By – Jay B. (BugsFixing Admin)
