Building an Application That Can Predict E-Commerce Buying Patterns

With the COVID-19 pandemic expected to continue for quite some time, many consumers now opt to shop online straight from their wireless devices for the products and services they need, rather than visiting traditional brick-and-mortar stores. Thus, it is becoming very important for E-Commerce merchants to have an application that can predict buying patterns on a real-time basis, and even gauge what future buying patterns will look like, so that they can stock the appropriate inventory levels.

Here is the Python source code to help create such an application:

```python
import csv

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import MeanShift, estimate_bandwidth

# Load data from input file
input_file = 'sales.csv'
file_reader = csv.reader(open(input_file, 'r'), delimiter=',')

X = []
for count, row in enumerate(file_reader):
    # The first row contains the column names
    if not count:
        names = row[1:]
        continue

    X.append([float(x) for x in row[1:]])

# Convert to numpy array
X = np.array(X)

# Estimate the bandwidth of the input data
bandwidth = estimate_bandwidth(X, quantile=0.8, n_samples=len(X))

# Compute clustering with MeanShift
meanshift_model = MeanShift(bandwidth=bandwidth, bin_seeding=True)
meanshift_model.fit(X)

labels = meanshift_model.labels_
cluster_centers = meanshift_model.cluster_centers_
num_clusters = len(np.unique(labels))

print("Number of clusters in input data =", num_clusters)
print("Centers of clusters:")
print('\t'.join([name[:3] for name in names]))
for cluster_center in cluster_centers:
    print('\t'.join([str(int(x)) for x in cluster_center]))

# Extract two features for visualization
cluster_centers_2d = cluster_centers[:, 1:3]

# Plot the cluster centers
plt.figure()
plt.scatter(cluster_centers_2d[:, 0], cluster_centers_2d[:, 1],
        s=120, edgecolors='blue', facecolors='none')

offset = 0.25
plt.xlim(cluster_centers_2d[:, 0].min() - offset * cluster_centers_2d[:, 0].ptp(),
        cluster_centers_2d[:, 0].max() + offset * cluster_centers_2d[:, 0].ptp())
plt.ylim(cluster_centers_2d[:, 1].min() - offset * cluster_centers_2d[:, 1].ptp(),
        cluster_centers_2d[:, 1].max() + offset * cluster_centers_2d[:, 1].ptp())

plt.title('Centers of 2D clusters')
plt.show()
```

(Artasanchez & Joshi, 2020).
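The listing above depends on the book's sales.csv, which is not reproduced here. As a quick sanity check, the same estimate-bandwidth-then-MeanShift steps can be run on a small synthetic dataset; the group locations and the quantile=0.2 setting below are illustrative choices for well-separated groups, not values from the book:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# Hypothetical stand-in for sales.csv: three tight groups of 2D sales figures
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[10, 10], scale=0.5, size=(20, 2)),
    rng.normal(loc=[50, 50], scale=0.5, size=(20, 2)),
    rng.normal(loc=[90, 10], scale=0.5, size=(20, 2)),
])

# A smaller quantile than the book's 0.8 keeps the bandwidth
# near the within-group spread, so the groups stay separate
bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=len(X))
model = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X)

num_clusters = len(np.unique(model.labels_))
print("Number of clusters =", num_clusters)
```

With data this cleanly separated, MeanShift recovers the three groups; on real sales data the quantile parameter usually needs tuning, since it directly controls how coarse or fine the clustering is.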

Building an Application That Can Recommend Top Movie Picks

As has been described throughout this book, chatbots are probably one of the biggest applications not just of Artificial Intelligence, but of Neural Networks as well. The idea behind all of this is that the conversation with either the prospect or the customer should be a seamless one, in which he or she feels they are engaging with a real human being. One of the basic thrusts of this is also to try to predict in advance what the questions, concerns, or queries might be, based upon previous conversations and interactions with the chatbot. In this application, we examine how to embed such a conversation when it comes to recommending movies for an individual. In a way, this is a primitive version of what Virtual Personal Assistants (VPAs) like Siri and Cortana can do as well.

Here is the Python source code:

```python
import argparse
import json

import numpy as np

from compute_scores import pearson_score
from collaborative_filtering import find_similar_users

def build_arg_parser():
    parser = argparse.ArgumentParser(description='Find recommendations '
            'for the given user')
    parser.add_argument('--user', dest='user', required=True,
            help='Input user')
    return parser

# Get movie recommendations for the input user
def get_recommendations(dataset, input_user):
    if input_user not in dataset:
        raise TypeError('Cannot find ' + input_user + ' in the dataset')

    overall_scores = {}
    similarity_scores = {}

    for user in [x for x in dataset if x != input_user]:
        similarity_score = pearson_score(dataset, input_user, user)

        # Skip users who are dissimilar to the input user
        if similarity_score <= 0:
            continue

        # Movies rated by this user but not yet rated by the input user
        filtered_list = [x for x in dataset[user] if x not in
                dataset[input_user] or dataset[input_user][x] == 0]

        # Accumulate similarity-weighted scores for each candidate movie
        for item in filtered_list:
            overall_scores[item] = overall_scores.get(item, 0) + \
                    dataset[user][item] * similarity_score
            similarity_scores[item] = similarity_scores.get(item, 0) + \
                    similarity_score

    if len(overall_scores) == 0:
        return ['No movie recommendations are possible']

    # Generate movie selection rankings by normalization
    movie_scores = np.array([[score / similarity_scores[item], item]
            for item, score in overall_scores.items()])

    # Sort in decreasing order of score
    movie_scores = movie_scores[np.argsort(movie_scores[:, 0].astype(float))[::-1]]

    # Extract the movie selection recommendations
    movie_recommendations = [movie for _, movie in movie_scores]

    return movie_recommendations

if __name__ == '__main__':
    args = build_arg_parser().parse_args()
    user = args.user

    ratings_file = 'ratings.json'
    with open(ratings_file, 'r') as f:
        data = json.loads(f.read())

    print("Movie recommendations for " + user + ":")
    movies = get_recommendations(data, user)
    for i, movie in enumerate(movies):
        print(str(i + 1) + '. ' + movie)
```

(Artasanchez & Joshi, 2020).
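The script imports pearson_score from a local compute_scores module that is not shown in this listing. A minimal sketch of what such a helper might look like (the exact implementation in the book may differ) is:

```python
import numpy as np

def pearson_score(dataset, user1, user2):
    """Pearson correlation between two users, computed over
    the movies that both of them have rated."""
    if user1 not in dataset or user2 not in dataset:
        raise TypeError('User not found in the dataset')

    # Movies rated by both users
    common = [movie for movie in dataset[user1] if movie in dataset[user2]]
    if len(common) == 0:
        return 0.0

    x = np.array([dataset[user1][movie] for movie in common], dtype=float)
    y = np.array([dataset[user2][movie] for movie in common], dtype=float)

    # Guard against zero variance (a user who gave identical ratings)
    if x.std() == 0 or y.std() == 0:
        return 0.0

    return float(np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std()))

# Two hypothetical users whose relative preferences agree perfectly,
# mirroring the structure of a ratings.json entry: user -> {movie: rating}
ratings = {
    'Alice': {'Vertigo': 4.0, 'Goodfellas': 2.0, 'Scarface': 3.0},
    'Bob':   {'Vertigo': 5.0, 'Goodfellas': 1.0, 'Scarface': 3.0},
}
score = pearson_score(ratings, 'Alice', 'Bob')
print(score)
```

Because the two users rank the same movies in the same relative order, the score comes out at 1.0; a score at or below zero is what makes get_recommendations skip a user as dissimilar.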

 