# Linear Programming & Discrete Optimization With PuLP


This article was published as a part of the Data Science Blogathon.

Linear programming, also called mathematical programming, is a branch of mathematics that we use to solve a system of linear equations or inequalities to maximize or minimize some linear function.

The objective is to find the values of the decision variables that maximize or minimize a linear objective function, which is why this problem is also known as linear optimization.

Discrete Optimization & Solving Techniques

With this, let us take a step back and understand what discrete optimization is. Discrete optimization is the part of optimization methodology that deals with discrete quantities, i.e., non-continuous functions. Familiarity with such techniques is of paramount importance for data scientists and machine learning practitioners, as discrete and continuous optimization problems lie at the heart of modern ML and AI systems as well as data-driven business analytics. The problems in this domain are solved with the techniques of linear programming (as defined above) and mixed-integer linear programming.

We use mixed-integer linear programming to solve problems in which at least one of the variables (for now, consider these the independent variables of an equation) is discrete rather than continuous. On the surface, it might seem better to have discrete values rather than continuous real numbers. Is that actually the case?


Unfortunately not! I shall answer this question after outlining some basic terminology of linear programming for a coherent understanding.

Basic Terminologies of Linear Programming

Consider a small linear programming problem: maximize a linear objective function z of two variables x and y, subject to three color-coded constraints:

2x + y ≤ 20 (red)
−4x + 5y ≤ 10 (blue)
−x + 2y ≥ −2 (yellow)

The motive here is to find the optimal values of x and y such that the red, blue, and yellow inequalities, together with x ≥ 0 and y ≥ 0, are satisfied. The optimal values are those that maximize the objective function z.

Objective Function – It is also known as the cost function or the “goal” of our optimization problem. Either we’ll maximize or minimize it based on the given problem statement. For example, it could be maximizing the profit of a company or minimizing the expenses of the company due to its regular functioning.

Decision Variables – The independent variables to be found to optimize the objective function are called the decision variables. In this case, x and y are the decision variables, the unknowns of the mathematical programming model.

Constraints – The restrictions on the decision variables in an optimization problem are known as constraints. In the above example, the inequalities in red, blue & yellow are the constraints. Note that these could very well be equations or equality constraints.

Feasible Region  – A feasible region or solution space is the set of all possible points of an optimization problem that satisfy the problem’s constraints. Let us understand the feasible region with the help of visualization of the constraints given in the example: –

The red area above the red line 2x + y = 20 is where the inequality 2x + y ≤ 20 is violated. In the same way, the blue area violates the blue inequality −4x + 5y ≤ 10. The yellow line is −x + 2y = −2, and the yellow area below it is where the yellow inequality isn't valid. Each point of the gray area satisfies all constraints and is a potential solution to the problem. This area is called the feasible region, and its points are feasible solutions; in this case, there are infinitely many. Reading the optimum off such a plot is known as the graphical method. For a better understanding of this method, please refer to the video.
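Because the optimum of a linear program lies at a corner of the feasible region, the graphical method can be mimicked in plain Python by intersecting pairs of constraint boundaries and keeping the feasible corners. This is only an illustrative sketch, and the objective z = x + 2y is a made-up assumption (the figure above does not state one):

```python
from itertools import combinations

# Each row is a1*x + a2*y <= b. The ">=" yellow constraint is rewritten
# as x - 2y <= 2, and x >= 0, y >= 0 become -x <= 0 and -y <= 0.
constraints = [
    (2, 1, 20),    # red:    2x + y  <= 20
    (-4, 5, 10),   # blue:  -4x + 5y <= 10
    (1, -2, 2),    # yellow: -x + 2y >= -2
    (-1, 0, 0),    # x >= 0
    (0, -1, 0),    # y >= 0
]

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundaries never meet
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= d + 1e-9 for a, b, d in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]

# Hypothetical objective: z = x + 2y
best = max(vertices, key=lambda p: p[0] + 2 * p[1])
print(best)  # the corner where red and blue boundaries meet
```

The best corner here sits where the red and blue boundaries intersect (x = 45/7, y = 50/7); real solvers use far more efficient variants of this corner-hopping idea (the simplex method).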

Now that we are familiar with the terminology, let us throw some more light on mixed-integer linear programming. This type of problem matters especially when we express quantities in integers, like the number of products produced by a machine or the number of customers served. Mixed-integer linear programming overcomes many shortcomings of plain linear programming: one can approximate non-linear functions with piecewise linear functions, model logical constraints (a yes/no decision, like whether a customer will churn), and more. A piecewise linear function is a segmented real-valued function whose graph consists of line segments only, with the data following a different linear trend over different regions. We will not go into the details, but you can read more about it here.

Now, recall the question of whether discrete values are better than continuous ones; let us take that up. Integer programming, as the name suggests, compels some or all of the variables to assume only integer values. Integer variables make an optimization problem non-convex, and therefore far more difficult to solve.
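As a taste of what integer variables look like in code, here is a minimal mixed-integer sketch in PuLP: a tiny knapsack problem. The item values, weights, and capacity are purely illustrative assumptions:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus, value

values = {'A': 10, 'B': 13, 'C': 7}   # hypothetical item values
weights = {'A': 5, 'B': 8, 'C': 4}    # hypothetical item weights
capacity = 10

knap = LpProblem("tiny_knapsack", LpMaximize)
take = LpVariable.dicts("take", values, cat='Binary')  # 0/1 decision variables

knap += lpSum(values[i] * take[i] for i in values)               # objective
knap += lpSum(weights[i] * take[i] for i in values) <= capacity  # capacity limit

knap.solve()
print(LpStatus[knap.status], value(knap.objective))  # Optimal 17.0 (items A and C)
```

Dropping `cat='Binary'` relaxes the model back to an ordinary linear program, which is exactly the continuous relaxation that branch-and-bound solvers work with internally.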

Why does Convexity matter?

A convex optimization problem is a problem where all of the constraints are convex functions, and the objective is a convex function if minimizing or a concave function if maximizing. Linear functions are convex, so linear programming problems are convex problems.

A function f is called convex if the line segment drawn from any point (x, f(x)) to another point (y, f(y)) lies on or above the graph of f, as shown in the figure below.


For a concave function, it is just the opposite – the line segment lies on or below the graph of the function f.
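The chord condition above can be checked numerically. A small sketch (the sample functions and the grid are my own illustrative choices):

```python
# Check f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) over a grid of point pairs.
def is_convex_on(f, points, steps=20):
    for i, x in enumerate(points):
        for y in points[i + 1:]:
            for k in range(steps + 1):
                t = k / steps
                chord = t * f(x) + (1 - t) * f(y)
                if f(t * x + (1 - t) * y) > chord + 1e-9:
                    return False  # the graph rises above the chord somewhere
    return True

pts = [i / 5 - 3 for i in range(31)]           # grid on [-3, 3]
print(is_convex_on(lambda v: v * v, pts))      # x**2 is convex: True
print(is_convex_on(lambda v: v ** 3, pts))     # x**3 is not convex on [-3, 3]: False
```

Linear functions pass this test with equality everywhere, which is precisely why linear programs are convex problems.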


Non-convex optimization may have multiple locally optimal points, and it can take a lot of time to identify whether the problem has no solution or whether a solution is globally optimal. Convex optimization problems are therefore much more efficient to solve, while memory and solution time may rise exponentially as you add more integer variables. If you wish to dive deeper into the mathematics, please go through this article.

Unbounded and Infeasible Linear Programming

Till now, we have just discussed feasible linear programming problems because they have bounded feasible regions and finite solutions. But we might come across problems where the feasible regions are not bounded, or the problem is infeasible.

Infeasible LP – A linear programming problem is infeasible if it has no feasible solution. This usually happens when no point can satisfy all constraints at once.

For example, consider adding the inequality constraint x + y ≤ −3 to the color-coded example above. At least one of the decision variables would then have to be negative, violating the non-negativity constraints. Such a system has no feasible solution, so it is called infeasible.

Unbounded Solution – A linear programming problem is unbounded if its feasible region isn’t bounded and the solution is not finite. This means that at least one of your variables isn’t constrained and can reach up to positive or negative infinity, making the objective infinite as well.

For example, if we removed the red and yellow inequalities, the decision variables would no longer be bounded on the positive side. That would let x or y grow without limit and make the objective function infinite as well.
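PuLP reports such outcomes through the problem status. A minimal sketch of the infeasible case described above (the model is a toy of my own construction):

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpStatus

bad = LpProblem("infeasible_demo", LpMinimize)
x = LpVariable("x", lowBound=0)
y = LpVariable("y", lowBound=0)
bad += x + y           # any objective will do
bad += x + y <= -3     # clashes with x >= 0 and y >= 0
bad.solve()
print(LpStatus[bad.status])  # 'Infeasible'
```

An unbounded model is flagged the same way through `LpStatus[prob.status]`, so it is good practice to check the status before trusting the variable values.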

Solvers

There are several excellent Python tools for linear programming and mixed-integer linear programming problems. Some of them are open source, while others are proprietary. Almost all widely used LP and MILP libraries are written in Fortran, C, or C++, because linear programming requires computationally intensive work with (often large) matrices. Such libraries are called solvers; the Python tools are just wrappers around them. Think of NumPy in Python, which provides a C API to give users access to its array objects.

Also note that there are GUI optimizer tools for solving LP problems, such as TORA. But knowing how to perform optimization programmatically is second to none, so we should master that as well.

Implementation in Code

Now, I know you must be tired of the theoretical concepts and would probably like a well-deserved break, but familiarity with the foundational concepts is important. We will now jump straight to a resource allocation problem and look at its implementation in Python. Several free Python libraries are available for interacting with linear or mixed-integer linear programming solvers, including SciPy, PuLP, Pyomo, and CVXPY.

For this article, we shall be using PuLP for tackling the LPP.

Problem

The task is the classic diet problem: select the quantities of different food items that minimize the total cost of the diet while meeting given nutritional requirements. In addition, you'll have to consider restrictions like budget and variety of food. You can download the data set from here and the Jupyter notebook from my GitHub repository.

Intuition

Minimize the cost of the food plan (inclusive of food items), subject to constraints on total calories and on each nutritional component, e.g., fat, vitamin C, iron, etc.

The cost function is the total cost of the food items, which we are trying to minimize, while the nutrition derived from the combination of food items must respect the maximum and minimum bounds given in the data. Each nutritional component's minimum and maximum bounds define the inequality constraints.
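Written out symbolically (the symbols are hypothetical names of my own choosing: x_j servings of food j, c_j its price per serving, and a_ij the amount of nutrient i per serving), the model is:

```latex
\min_{x}\; \sum_{j} c_j x_j
\quad \text{subject to} \quad
n_i^{\min} \;\le\; \sum_{j} a_{ij} x_j \;\le\; n_i^{\max} \quad \forall i,
\qquad x_j \ge 0 \quad \forall j.
```

The code that follows builds exactly this structure: a cost dictionary for the objective and one dictionary per nutrient for the constraint rows.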

You can install PuLP using Pip and then import everything from the library.

```python
# pip install pulp   (in Jupyter: !pip3 install pulp)

# Import everything from the library
from pulp import *
import numpy as np, pandas as pd
import warnings
warnings.filterwarnings('ignore')
```

Formulation of the Optimization Problem

First, we create an LP problem using the method LpProblem in PuLP.

prob = LpProblem('Diet_Problem', LpMinimize)

Now, we read the data set and create some dictionaries to extract information from the diet table. Note we are reading only the first 17 rows with the “nrows=17” argument because we just want to read all the nutrients’ information and not the maximum/minimum bounds in the dataset. We will enter those bounds in the optimization problem separately.

```python
df = pd.read_excel('diet.xls', nrows=17)
df.head()  # Here we see the data
```


Now we create a list of all the food items and thereafter create dictionaries of these food items with all the remaining columns. The columns denote the nutrition components or the decision variables here. Apart from the price per serving and the serving quantity, all the other columns denote the nutritional components like fat, Vitamins, cholesterol, etc. In this optimization problem, the minimum and maximum intakes of nutritional components are specified, which serve as constraints.

```python
food = list(df.Foods)  # The list of items
count = pd.Series(range(1, len(food) + 1))
print('List of different food items follows: -')
food_s = pd.Series(food)
# Convert to data frame
f_frame = pd.concat([count, food_s], axis=1, keys=['S.No', 'Food Items'])
f_frame
```


Now we create dictionaries of the food items with each of the nutritional components.

```python
# Create a dictionary per nutritional component, keyed by food item
costs = dict(zip(food, df['Price/Serving']))
calories = dict(zip(food, df['Calories']))
chol = dict(zip(food, df['Cholesterol (mg)']))
fat = dict(zip(food, df['Total_Fat (g)']))
sodium = dict(zip(food, df['Sodium (mg)']))
carbs = dict(zip(food, df['Carbohydrates (g)']))
fiber = dict(zip(food, df['Dietary_Fiber (g)']))
protein = dict(zip(food, df['Protein (g)']))
vit_A = dict(zip(food, df['Vit_A (IU)']))
vit_C = dict(zip(food, df['Vit_C (IU)']))
calcium = dict(zip(food, df['Calcium (mg)']))
iron = dict(zip(food, df['Iron (mg)']))
```

We just observe one of these dictionaries to see what it looks like. Here we take the example of ‘iron’ with all the food items in the data.


We now create a dictionary of all the food items’ variables keeping the following things in mind: –

The lower bound is equal to zero

The category should be continuous i.e. the decision variables could take continuous values.

This adjustment is necessary to enforce the non-negativity conditions, as negative quantities of food are not possible. Imagine −1 block of tofu; it would make no sense at all! Furthermore, it ensures that the variables take real values.

```python
# A dictionary called 'food_vars' is created to contain the referenced variables
food_vars = LpVariable.dicts("Food", food, lowBound=0, cat='Continuous')
```


Objective Function Addition

```python
prob += lpSum([costs[i] * food_vars[i] for i in food])
prob
```

We build the linear programming problem by adding the main objective function with the lpSum method; printing prob confirms that the objective has been attached to the problem.


Addition of the Constraints

We now add constraints based on the maximum and minimum intakes of the nutritional components in our data set. Do not forget the motive: we intend to minimize the cost subject to these constraints on the components.

For the sake of simplicity, and to maintain brevity, I am planning to define only five constraints. Now the lpSum method helps in calculating the sum of the linear expressions, so we will use it to define the constraint of Calories in the data.

```python
lpSum([food_vars[i] * calories[i] for i in food])
```


Comparing the two outputs above, we can see that lpSum takes the sum-product of the food items with their respective amounts of the nutritional component under consideration (calories in this case).

Note that we still have not defined the constraints. We define it by adding the same to our problem statement (Recall the objective function and the constraints are part of the LPP)

```python
prob += lpSum([food_vars[x] * calories[x] for x in food]) <= 1300, "CaloriesMaximum"
prob
```

Hence, the output shows how beautifully the function works. We have the objective function which is subject to the “Calories” constraint as defined in the code above. After the definition of one of the five components of nutrition, let’s move ahead and define the four remaining components as well to formulate the problem.

The five nutritional constraints I have chosen cover Calories (added above), Carbohydrates, Fat, Protein, and Vitamin A.

```python
# Carbohydrates constraint
prob += lpSum([food_vars[x] * carbs[x] for x in food]) <= 200, "CarbsMaximum"
# Fat constraint
prob += lpSum([food_vars[x] * fat[x] for x in food]) <= 50, "FatsMaximum"
# Protein constraint
prob += lpSum([food_vars[x] * protein[x] for x in food]) <= 150, "ProteinsMaximum"
# Vitamin A constraint
prob += lpSum([food_vars[x] * vit_A[x] for x in food]) <= 10000, "Vit_A_Maximum"
```

This concludes the formulation of the LPP. The most significant part is the building up of the problem statement with the objective function and the constraints defined correctly. The solving is just a cakewalk (at least in programming!)

Running the Solver Algorithm

Finally, we have our problem ready, and now we shall run the solver. We could pass parameters to the solver, but in this case I will run it without any and let PuLP pick a sensible default based on the structure of the problem.

prob.solve()

PuLP allows you to choose solvers and formulate problems more naturally. The default solver used by PuLP is the COIN-OR Branch and Cut Solver (CBC). It will connect to the COIN-OR Linear Programming Solver (CLP) for linear relaxations.

```python
prob.solver
```

Now we print the status of the problem. If we don't formulate the problem well, the status might be 'Infeasible', or, if the problem does not provide sufficient information, 'Unbounded'. Our status is 'Optimal', which means we have found optimum values.

LpStatus[prob.status]

You can get the optimization results as attributes of the model; the function value() and the corresponding .value() method return the actual values. prob.objective holds the objective expression, and prob.constraints contains the constraints, whose values show the slack left in each (we don't require them here, but it is good to know).

```python
for var in prob.variables():
    print(f'Variable name: {var.name}, Variable value: {var.value()}\n')
print('*' * 100)

# We can also see the slack of the constraints
for name, con in prob.constraints.items():
    print(f'Constraint name: {name}, Constraint value: {con.value()}\n')
print('*' * 100)

# OBJECTIVE VALUE
print(f'OBJECTIVE VALUE IS: {prob.objective.value()}')
```


Value of the Objective Function

The optimal value of the objective function is $5.58. The interpretation of the result would be the following: –

2.64 servings of baked potatoes

4.02 servings of scrambled eggs

1.23 servings of Roasted chicken

1.41 servings of Frozen broccoli

Conclusion

So, with the optimal value, we have charted out a diet plan that minimizes the budget (cost of the diet plan) and maximizes the nutritional components for the individual.

The takeaway here is that we can attain the best outcome under linear inequalities (or equalities) through the optimization technique described above.

Note that optimization problems arise in many different forms. For example, imagine you have a business with several candidate strategies, some of which might lead to a significant increase in profits. Your job, in this case, is to maximize profitability by selecting the best of the available strategies.

Another example is a factory production plan. Each machine has a maximum production capacity and produces different items with different characteristics. As an engineer, your job would be to maximize the output of items subject to all the capacity constraints of the machinery.

Since we did not pass any parameter, PuLP used the default CBC solver. We could have used a different solver, such as the GNU Linear Programming Kit (GLPK). Some well-known and very powerful commercial, proprietary solvers are Gurobi, CPLEX, and XPRESS.
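Passing a solver explicitly is a one-line change. A minimal sketch with the bundled CBC backend (the trivial model here is my own; `GLPK_CMD()` would work the same way if GLPK is installed):

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpStatus, PULP_CBC_CMD

demo = LpProblem("solver_demo", LpMinimize)
x = LpVariable("x", lowBound=1)
demo += x                         # minimize x subject to x >= 1
demo.solve(PULP_CBC_CMD(msg=0))   # explicit solver choice, log output silenced
print(LpStatus[demo.status], x.value())  # 'Optimal' 1.0
```

`msg=0` is handy in notebooks, where CBC's default log output can drown the cell.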

I hope I was able to give you some concrete idea of how we go about solving Linear programming problems using PuLP. If you think any portion is ambiguous or needs improvement, you are most welcome to make suggestions. Thank you for reading!

Hi there! Currently, I am working as a credit analyst in the field of Finance after pursuing a Bachelor’s degree in Statistics.

My areas of interest include Data Science, ML/DL, Statistics, and BI/Analytics. Please feel free to reach out to me on LinkedIn and have a look at some of my other projects on my Github profile. You could also connect with me via mail for absolutely anything!

The media shown in this article are not owned by Analytics Vidhya and are used at the author's discretion.


Defining Redis’ Linear Scalability And Performance

Introduction to Redis Architecture

Redis architecture covers single-instance as well as multi-instance deployment models: a single Redis instance, Redis HA, Redis Cluster, and Redis Sentinel. We select the architecture according to our use case. A single instance is the most straightforward deployment, allowing users to run small instances that can grow as the service grows.

Key Takeaways

Redis is both a key-value database server and a data structure server. It is an in-memory database, so all data is stored in memory.

It supports master-slave replication, high availability with Sentinel, and clustering; we also describe a single cluster in Redis below.

What is Redis Architecture?

The Redis Cluster replication architecture follows Redis's design goals: linear scalability and high performance. There are no proxies, asynchronous replication is used, and no merge operations are carried out. Redis uses a primary-replica architecture with asynchronous replication, in which data is replicated to multiple replica servers.


Explanation of Redis Architecture

The architecture involves two main processes: the Redis client and the Redis server. We can install the server and the client on the same system or on two different machines, and multiple clients can connect to a single server at the same time to have their requests processed.


The Redis server is responsible for storing data in memory and handles all the management, forming the core of the architecture. The Redis client is essentially a Redis console or a programming-language client for the Redis API. Redis stores all data in primary memory, which is volatile, so we lose data when the server is restarted. For that reason, Redis supports the following mechanisms for data persistence.


RDB – RDB makes a copy of all data in memory and stores it in permanent storage at the intervals we have defined.

AOF – This logs every write operation received by the server, allowing the data to be reconstructed and persisted.

Save Command – Using the save command, the Redis server can be forced to create an RDB snapshot at any time.
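These mechanisms map to ordinary redis.conf directives. An illustrative fragment (the thresholds are example values, not recommendations):

```conf
save 900 1           # RDB: snapshot if at least 1 key changed in 900 seconds
save 300 10          # ...or at least 10 keys changed in 300 seconds
appendonly yes       # AOF: log every write operation
appendfsync everysec # fsync the AOF roughly once per second
```

A snapshot can also be forced at any time with `redis-cli SAVE` (or the non-blocking `BGSAVE`).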

Redis also supports replication for fault tolerance and data accessibility. We can group two or more servers from the specified cluster to increase storage capacity.

Redis Architecture Master Slave

Redis master-slave is very easy to use: one master server and one or more slave servers, which are configured as exact copies of the master. The architecture is shown below, with a single master server and two slave servers.

They will have two parts as follows:

Master

Slave

In the master-slave architecture, the master accepts both read and write operations, while the slaves accept only read operations.

With a single master and multiple slave servers, all write operations are routed to the master, increasing its load; if the master fails, the architecture has a single point of failure. Master-slave replication also does not scale writes as the user base grows: data is written on the master, and copies are sent to the secondary servers. The replica servers support read operations and are also useful during failover.
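Wiring up a replica is a single configuration line on each slave. An illustrative fragment (the host and port are placeholders; Redis 5+ uses `replicaof`, while older versions use `slaveof`):

```conf
replicaof 192.168.1.10 6379   # follow this master
replica-read-only yes         # replicas serve reads only (the default)
```

The same effect can be achieved at runtime with `redis-cli REPLICAOF <host> <port>`.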

Redis Architecture Processes

Redis client

Redis server

Both processes are very important. The Redis client may consist of multiple processes, whereas the Redis server runs as a single process. We can install the client on the same machine where the server runs, or on another system.


The Redis client sends requests to the Redis server. Before processing a request, the server authenticates the user; after successful authentication, it processes the request and returns the result. If authentication fails, the client receives an authentication failure error. For persistence, three mechanisms are used: AOF, RDB, and the save command. The Redis client and Redis server are the key processes in the architecture of Redis.

Replication

Redis replication architecture is a technique that uses multiple computers to enable data access and fault tolerance. In a replication environment, multiple computers share data with one another, so if one or more computers fail, data is still available on other computers.

In the Redis replication architecture, all slaves hold the same data as the master. When a new slave is added, the master automatically syncs its data to the newly added slave. All queries are directed to the master server; when the master receives write operations, it replicates the data to the slave servers, and when a large number of read operations occur, the master distributes them among the slaves.

If a slave server fails, the environment keeps working and data consistency is not disrupted. When the server starts working again, the master sends it the updated data.

If the master server crashes and loses all of its data, we promote a slave to master. Replication thus helps in the event of disk or hardware failure, and also offloads the execution of read queries.

FAQ

Given below are the FAQs mentioned:

Q1. Which processes are used in the Redis architecture?

Answer: The Redis architecture defines two main processes: the Redis client and the Redis server.

Q2. Which mechanisms are used for data persistence?

Answer: Three mechanisms are used for data persistence: the save command, AOF, and RDB.

Q3. What is the use of RDB in the Redis architecture?

Answer: RDB makes a copy of the data in memory and stores it on disk, providing durability across restarts.

Conclusion

The Redis Cluster replication architecture is implemented according to Redis's design goals: linear scalability and high performance. Redis architecture includes both single-instance and multi-instance models and is mainly concerned with the single Redis instance, Redis HA, Redis Cluster, and Redis Sentinel.


Scratch Programming Examples

All about Scratch Programming Examples

Scratch is a language developed to ease the writing of programs for games, animations, music, and more. It primarily targets children from the age of 10 and older and was developed to teach the computational thought process: how a simple language can be a powerful building block for software development, focusing more on building a stable application than on syntax, unlike C or C++.


What are Scratch Programming examples?

If you are a beginner and want to learn something exciting, buy yourself a Raspberry Pi, a microcomputer. It ships with NOOBS (New Out Of Box Software), an operating system installer whose default OS comes with Scratch pre-installed.

How to Get Started with Scratch Programming Examples?

Scratch Programming examples are extremely fun to learn. To get more basics, simply download the official documentation from its website, which will give you an overview of Scratch.

The basic requirements for Scratch would be as follows:

800×600 display or larger (the official minimum is 800×480, but it lags too much).

16-bit color depth (32-bit recommended).

Win7 or later for Windows.

150 MB of disk space (200 recommended depending upon applications and modules installed).

512 MB of RAM or higher.

What can Scratch do?

You can learn simulation from Scratch. Simulation means it can create a virtual demonstration by imitating things that can be done in real life. You can also create Multimedia objects such as puzzles, 3d presentations, quizzes, and many more. And if you are good at math, you can also create interactive and non-interactive Art Projects. Scratch programming examples are awesome for developing interactive musical instruments and Games.

But these are just the basics. You may be wondering what the real-world implications of Scratch are. So, let’s get on to it.

Scratch Programming exercises are awesome for starting a robotics career (if you are a beginner). If you are not content with Raspberry Pi, you can buy a PicoBoard that looks like this:

The PicoBoard is a piece of hardware that lets you interact with the real world using Scratch. It has a slider, a button, and alligator clips, and can sense sound and light. Scratch programming can help in controlling robots, LEDs, and various other sensors; it can also work with a microphone and its volume sensor, a connected camera, and a joystick programmed to control your robot. In fact, Scratch can also work with Arduino boards.

What’s Next?

Scratch was designed specifically to encourage creativity, allowing developers to discover their own ideas and apply them with images and sounds to invent multimedia software on the go. A young programmer with just a few days of experience can develop games, create animations, and write similar pieces of code with Scratch projects.

The Scratch programming environment consists of a small screen space with multiple programmable objects, known as sprites. Sprites can move around the display and respond when different events trigger, including interactions with other sprites and user keyboard input.

Each sprite has a set of costumes that can modify its appearance on the stage (the aforementioned screen space) to produce different animations and effects. Sprites can also display speech bubbles and play sounds from mp3 files.

In Scratch, code is written by capturing blocks from the palette and snapping them into the slots of each programming construct. This avoids the friction of typing in syntax and allows young minds (kids and beginners) to develop programs with as little debugging as possible.

This eliminates the syntax errors caused by typing incorrect keywords. Each sprite contains multiple scripts programmed to run a sequence of operations whenever a specific event occurs. Control blocks run iterations of these sequences: they can repeat as many times as needed or loop forever to define the sprite's behavior.

Conditional statements are similar to those in other languages and allow different sequences of commands to run depending on the current state of the Scratch environment.

Scratch Mentality

Scratch projects for beginners were specifically designed to make getting into robotics as easy as possible. Much of the above will be hard to follow until you install Scratch and try it yourself. To inspire young developers further, Scratch also lets sprites interact with each other by broadcasting messages and responding to them quickly, for example by calculating the distance to the nearest sprite.

Community and Project Hubs

Projects based on Scratch are readily available online and can run in any web browser that supports Java applets. Scratch programmers are encouraged to upload their projects to the Scratch website (10 MB is the maximum file size, which pushes developers to write programs as compactly as possible, leading to compact pieces of code).

Projects shared on the official Scratch website are visible to everyone, registered or not. Other users can download a program, modify it, and extend it to their own requirements. This is somewhat similar to the open-source BSD-style license of the Go programming language.

In Scratch, individual sprites can be added to and removed from projects downloaded from the website. Scratch is best suited to small games, puzzles, entertainment programs, and storytelling animations similar to Flash programs. These are extremely easy to build with sprites moving around the stage with sounds and speech bubbles.

A simple board like the PicoBoard or a Raspberry Pi can be purchased on Amazon, eBay, and SparkFun. The PicoBoard comes with a few sensors preinstalled, including a light sensor (the Raspberry Pi ships without these sensors, but you can purchase them separately). Such boards allow Scratch-developed programs to interact with the real world.

Conclusion

Scratch is an awesome programming language, but it has limitations. A programmer who learns only Scratch may feel unconfident among programmers with C, C++, Python, or Ruby experience, because Scratch is a piece of cake in comparison to languages like Java.

The reason is that Scratch exercises motivate young programmers by making it easy to develop applications that are intelligent and fun at the same time. These applications are attractive because they can interact with the user, change their on-screen appearance, move, and make different sounds.

But we should not ignore that Scratch lessons provide a top-notch interface to nurture young programmers' creativity and encourage them to build more programs and learn by sharing. This can be a good creative foundation for children whose parents want them in creative fields like animation or robotics.

The Scratch programming language is not perfect, but it is necessary. It introduces young developers to a whole new world and encourages schools to teach it to children, broadening their thinking.

Recommended Articles

Here are some further related articles to learn more:

Twitter Sentiment Analysis Using Python Programming.

Sentiment analysis is the process of estimating the sentiment of people who give feedback about a certain event, either through written text or oral communication. Of course, oral communication must first be converted to written text so that a Python program can analyze it. The sentiment expressed may be positive or negative. By assigning weights to the different words in the text, we calculate a numeric value that gives us a mathematical evaluation of the sentiment.

Usefulness

Customer Feedback − It is vital for a business to know customers' opinions about its products or services. When customer feedback is available as written text, we can run sentiment analysis on Twitter to programmatically determine whether the overall feedback is positive or negative and take corrective action.

Political Campaigns − For politicians, it is vital to know how people react to the speeches they deliver. If public feedback can be gathered through online platforms like social media, we can judge the response of the public to a specific speech.

Government Initiatives − When the government implements new schemes, it can judge the response by gathering public opinion. The public often voices its praise or anger on Twitter.

Approach

Below we list the steps required to build the sentiment analysis program in Python.

First, we install Tweepy and TextBlob. These modules help us gather data from Twitter and extract and process the tweet text.

Authenticating to Twitter. We need API keys so that data can be extracted from Twitter.

Then we classify the tweets as positive or negative based on the text in the tweet.

Example

import re
import tweepy
from tweepy import OAuthHandler
from textblob import TextBlob

class Twitter_User(object):

   def __init__(self):
      consumer_key = '1ZG44GWXXXXXXXXXjUIdse'
      consumer_secret = 'M59RI68XXXXXXXXXXXXXXXXV0P1L6l7WWetC'
      access_token = '865439532XXXXXXXXXX9wQbgklJ8LTyo3PhVDtF'
      access_token_secret = 'hbnBOz5XXXXXXXXXXXXXefIUIMrFVoc'
      try:
         self.auth = OAuthHandler(consumer_key, consumer_secret)
         self.auth.set_access_token(access_token, access_token_secret)
         self.api = tweepy.API(self.auth)
      except Exception:
         print("Error: Authentication Failed")

   def pristine_tweet(self, twitter):
      # Strip @mentions, links, and special characters before analysis
      return ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)", " ", twitter).split())

   def Sentiment_Analysis(self, twitter):
      audit = TextBlob(self.pristine_tweet(twitter))
      # set sentiment from the polarity score
      if audit.sentiment.polarity > 0:
         return 'positive'
      elif audit.sentiment.polarity == 0:
         return 'neutral'
      else:
         return 'negative'

   def tweet_analysis(self, query, count = 10):
      twitter_tweets = []
      try:
         get_twitter = self.api.search(q = query, count = count)
         for tweets in get_twitter:
            inspect_tweet = {}
            inspect_tweet['text'] = tweets.text
            inspect_tweet['sentiment'] = self.Sentiment_Analysis(tweets.text)
            # skip duplicates (e.g. retweets of the same text)
            if inspect_tweet not in twitter_tweets:
               twitter_tweets.append(inspect_tweet)
         return twitter_tweets
      except tweepy.TweepError as e:
         print("Error : " + str(e))

def main():
   api = Twitter_User()
   twitter_tweets = api.tweet_analysis(query = 'Ram Nath Kovind', count = 200)
   Positive_tweets = [tweet for tweet in twitter_tweets if tweet['sentiment'] == 'positive']
   print("Positive tweets percentage: {} %".format(100*len(Positive_tweets)/len(twitter_tweets)))
   Negative_tweets = [tweet for tweet in twitter_tweets if tweet['sentiment'] == 'negative']
   print("Negative tweets percentage: {} %".format(100*len(Negative_tweets)/len(twitter_tweets)))
   print("\n\nPositive_tweets:")
   for tweet in Positive_tweets[:10]:
      print(tweet['text'])
   print("\n\nNegative_tweets:")
   for tweet in Negative_tweets[:10]:
      print(tweet['text'])

if __name__ == "__main__":
   main()

Output

Running the above code gives us the following result −

Positive tweets percentage: 48.78048780487805 % Negative tweets percentage: 46.34146341463415 % Positive_tweets: RT @heartful_ness: "@kanhashantivan presents a model of holistic living. My deep & intimate association with this organisation goes back to… RT @heartful_ness: Heartfulness Guide @kamleshdaaji welcomes honorable President of India Ram Nath Kovind @rashtrapatibhvn, honorable first… RT @DrTamilisaiGuv: Very much pleased by the affection shown by our Honourable President Sri Ram Nath Kovind and First Lady madam Savita Ko… RT @BORN4WIN: Who became the first President of India from dalit community? A) K.R. Narayanan B) V. Venkata Giri C) R. Venkataraman D) Ram… Negative_tweets: RT @Keyadas63: What wuld those #empoweredwomen b termed who reach Hon HC at the drop of a hat But Demand #Alimony Maint? @MyNation_net @vaa… RT @heartful_ness: Thousands of @heartful_ness practitioners meditated with Heartfulness Guide @kamleshdaaji at @kanhashantivan & await the… RT @TurkeyinDelhi: Ambassador Sakir Ozkan Torunlar attended the Joint Session of Parliament of #India and listened the address of H.E. Shri…

10 Powerful Applications Of Linear Algebra In Data Science (With Multiple Resources)

Overview

Linear algebra powers various and diverse data science algorithms and applications

Here, we present 10 such applications where linear algebra will help you become a better data scientist

We have categorized these applications into various fields – Basic Machine Learning, Dimensionality Reduction, Natural Language Processing, and Computer Vision

Introduction

If Data Science was Batman, Linear Algebra would be Robin. This faithful sidekick is often ignored. But in reality, it powers major areas of Data Science including the hot fields of Natural Language Processing and Computer Vision.

I have personally seen a LOT of data science enthusiasts skip this subject because they find the math too difficult to understand. Since the programming languages used for data science offer a plethora of packages for working with data, people don’t bother much with the linear algebra underneath.

That’s a mistake. Linear algebra is behind all the powerful machine learning algorithms we are so familiar with. It is a vital cog in a data scientist’s skillset. As we will soon see, you should consider linear algebra a must-know subject in data science.

And trust me, Linear Algebra really is all-pervasive! It will open up possibilities of working and manipulating data you would not have imagined before.

In this article, I have explained in detail ten awesome applications of Linear Algebra in Data Science. I have broadly categorized the applications into four fields for your reference:

I have also provided resources for each application so you can deep dive further into the one(s) which grabs your attention.

Note: Before you read on, I recommend going through this superb article – Linear Algebra for Data Science. It’s not mandatory for understanding what we will cover here but it’s a valuable article for your budding skillset.

Table of Contents

Why Study Linear Algebra?

Linear Algebra in Machine Learning

Loss functions

Regularization

Covariance Matrix

Support Vector Machine Classification

Linear Algebra in Dimensionality Reduction

Principal Component Analysis (PCA)

Singular Value Decomposition (SVD)

Linear Algebra in Natural Language Processing

Word Embeddings

Latent Semantic Analysis

Linear Algebra in Computer Vision

Image Representation as Tensors

Convolution and Image Processing

Why Study Linear Algebra?

I have come across this question way too many times. Why should you spend time learning Linear Algebra when you can simply import a package in Python and build your model? It’s a fair question. So, let me present my point of view regarding this.

I consider Linear Algebra as one of the foundational blocks of Data Science. You cannot build a skyscraper without a strong foundation, can you? Think of this scenario:

You want to reduce the dimensions of your data using Principal Component Analysis (PCA). How would you decide how many Principal Components to preserve if you did not know how it would affect your data? Clearly, you need to know the mechanics of the algorithm to make this decision.

With an understanding of Linear Algebra, you will be able to develop a better intuition for machine learning and deep learning algorithms and not treat them as black boxes. This would allow you to choose proper hyperparameters and develop a better model.

You would also be able to code algorithms from scratch and make your own variations to them as well. Isn’t this why we love data science in the first place? The ability to experiment and play around with our models? Consider linear algebra as the key to unlock a whole new world.

Linear Algebra in Machine Learning

The big question – where does linear algebra fit in machine learning? Let’s look at four applications you will all be quite familiar with.

1. Loss Functions

You must be quite familiar with how a model, say a Linear Regression model, fits a given data:

You start with some arbitrary prediction function (a linear function for a Linear Regression Model)

Use it on the independent features of the data to predict the output

Calculate how far-off the predicted output is from the actual output

Use these calculated values to optimize your prediction function using some strategy like Gradient Descent

But wait – how can you calculate how different your prediction is from the expected output? Loss Functions, of course.

A loss function is an application of the Vector Norm in Linear Algebra. The norm of a vector can simply be its magnitude. There are many types of vector norms. I will quickly explain two of them:

L1 Norm: Also known as the Manhattan Distance or Taxicab Norm. The L1 norm is the distance you would travel from the origin to the vector if the only permitted directions were parallel to the axes of the space.

In this 2D space, you could reach the vector (3, 4) by traveling 3 units along the x-axis and then 4 units parallel to the y-axis. Or you could travel 4 units along the y-axis first and then 3 units parallel to the x-axis. In either case, you travel a total of 7 units.

L2 Norm: Also known as the Euclidean Distance. The L2 norm is the shortest, straight-line distance of the vector from the origin.

But how is the norm used to find the difference between the predicted values and the expected values? Let’s say the predicted values are stored in a vector P and the expected values are stored in a vector E. Then P-E is the difference vector. And the norm of P-E is the total loss for the prediction.
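A minimal numpy sketch of this idea, with made-up prediction and target values for illustration:

```python
import numpy as np

# Predicted and expected values as vectors (hypothetical numbers)
P = np.array([2.5, 0.0, 2.1, 7.8])
E = np.array([3.0, -0.5, 2.0, 7.5])

diff = P - E                          # the difference vector

l1_loss = np.sum(np.abs(diff))        # L1 norm: sum of absolute differences
l2_loss = np.sqrt(np.sum(diff**2))    # L2 norm: Euclidean distance

# Both agree with numpy's built-in vector norm
assert np.isclose(l1_loss, np.linalg.norm(diff, ord=1))
assert np.isclose(l2_loss, np.linalg.norm(diff, ord=2))
print(l1_loss, l2_loss)
```

The total loss is simply the norm of the difference vector P - E, exactly as described above.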

2. Regularization

Regularization is a very important concept in data science. It’s a technique we use to prevent models from overfitting. Regularization is actually another application of the Norm.

A model is said to overfit when it fits the training data too well. Such a model does not perform well with new data because it has learned even the noise in the training data. It will not be able to generalize on data that it has not seen before. The below illustration sums up this idea really well:

Regularization penalizes overly complex models by adding the norm of the weight vector to the cost function. Since we want to minimize the cost function, we will need to minimize this norm. This causes unrequired components of the weight vector to reduce to zero and prevents the prediction function from being overly complex.
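As a sketch, the penalized cost described above can be written directly in numpy; the data, weights, and lam value below are made up purely for illustration:

```python
import numpy as np

def ridge_cost(w, X, y, lam):
    """Squared-error loss plus lam times the squared L2 norm of the weights."""
    residual = X @ w - y
    return np.sum(residual**2) + lam * np.sum(w**2)

def lasso_cost(w, X, y, lam):
    """Squared-error loss plus lam times the L1 norm of the weights."""
    residual = X @ w - y
    return np.sum(residual**2) + lam * np.sum(np.abs(w))

# Toy data: a larger lam penalizes large weights more heavily
X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
print(ridge_cost(w, X, y, lam=1.0))   # 9.0
```

Minimizing such a cost pushes unneeded weight components toward zero, which is exactly how the norm term prevents the prediction function from becoming overly complex.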

You can read the below article to learn about the complete mathematics behind regularization:

The L1 and L2 norms we discussed above are used in two types of regularization:

L1 regularization used with Lasso Regression

L2 regularization used with Ridge Regression

Refer to our complete tutorial on Ridge and Lasso Regression in Python to know more about these concepts.

3. Covariance Matrix

Bivariate analysis is an important step in data exploration. We want to study the relationship between pairs of variables. Covariance or Correlation are measures used to study relationships between two continuous variables.

Covariance indicates the direction of the linear relationship between the variables. A positive covariance indicates that an increase or decrease in one variable is accompanied by the same in another. A negative covariance indicates that an increase or decrease in one is accompanied by the opposite in the other.

On the other hand, correlation is the standardized value of Covariance. A correlation value tells us both the strength and direction of the linear relationship and has the range from -1 to 1.

Now, you might be thinking that this is a concept of Statistics and not Linear Algebra. Well, remember I told you Linear Algebra is all-pervasive? Using the concepts of transpose and matrix multiplication in Linear Algebra, we have a pretty neat expression for the covariance matrix: (X^T X) / (n - 1).

Here, X is the standardized data matrix containing all numerical features and n is the number of observations.
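This expression can be verified in a few lines of numpy; the random data below is purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 3))      # 100 samples, 3 numerical features

# Standardize: zero mean, unit variance per column
X = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)

n = X.shape[0]
cov = (X.T @ X) / (n - 1)             # the covariance matrix from the text

# Matches numpy's built-in covariance of the standardized data
assert np.allclose(cov, np.cov(X, rowvar=False))
```

Because X is standardized, the diagonal entries come out to 1 and the off-diagonal entries are the pairwise correlations.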

I encourage you to read our Complete Tutorial on Data Exploration to know more about the Covariance Matrix, Bivariate Analysis and the other steps involved in Exploratory Data Analysis.

4. Support Vector Machine Classification

Ah yes, support vector machines. One of the most common classification algorithms that regularly produces impressive results. It is an application of the concept of Vector Spaces in Linear Algebra.

Support Vector Machine, or SVM, is a discriminative classifier that works by finding a decision surface. It is a supervised machine learning algorithm.

In this algorithm, we plot each data item as a point in an n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate. Then, we perform classification by finding the hyperplane that best separates the two classes, i.e., the one with the maximum margin.

A hyperplane is a subspace whose dimensions are one less than its corresponding vector space, so it would be a straight line for a 2D vector space, a 2D plane for a 3D vector space and so on. Again Vector Norm is used to calculate the margin.

But what if the data is not linearly separable like the case below?

Our intuition says that the decision surface has to be a circle or an ellipse, right? But how do you find it? Here, the concept of Kernel Transformations comes into play. The idea of transformation from one space to another is very common in Linear Algebra.

Let’s introduce a variable z = x^2 + y^2. This is how the data looks if we plot it along the z and x-axes:

Now, this is clearly linearly separable by a line z = a, where a is some positive constant. On transforming back to the original space, we get x^2 + y^2 = a as the decision surface, which is a circle!
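A small numpy sketch of this transformation, using synthetic circular data (the radii below are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
theta = rng.uniform(0, 2 * np.pi, 100)

# Inner class: points on a circle of radius 0.5; outer class: radius 2.0
inner = np.c_[0.5 * np.cos(theta), 0.5 * np.sin(theta)]
outer = np.c_[2.0 * np.cos(theta), 2.0 * np.sin(theta)]

def z_feature(points):
    """The transformation z = x^2 + y^2 from the text."""
    return points[:, 0]**2 + points[:, 1]**2

# In the new z coordinate, a threshold z = a (here a = 1) separates the classes
assert z_feature(inner).max() < 1.0 < z_feature(outer).min()
```

The data that was not linearly separable in (x, y) becomes separable by a single threshold in z, which maps back to the circle x^2 + y^2 = a in the original space.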

And the best part? We do not need to add additional features on our own. SVM has a technique called the kernel trick. Read this article on Support Vector Machines to learn about SVM, the kernel trick and how to implement it in Python.

Dimensionality Reduction

You will often work with datasets that have hundreds and even thousands of variables. That’s just how the industry functions. Is it practical to look at each variable and decide which one is more important?

That doesn’t really make sense. We need to bring down the number of variables to perform any sort of coherent analysis. This is what dimensionality reduction is. Now, let’s look at two commonly used dimensionality reduction methods here.

5. Principal Component Analysis (PCA)

Principal Component Analysis, or PCA, is an unsupervised dimensionality reduction technique. PCA finds the directions of maximum variance and projects the data along them to reduce the dimensions.

Without going into the math, these directions are the eigenvectors of the covariance matrix of the data.

Eigenvectors of a square matrix are special non-zero vectors whose direction does not change even after applying a linear transformation (that is, multiplying) with the matrix.

You can easily implement PCA in Python using the PCA class in the scikit-learn package:

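A minimal sketch of the idea using scikit-learn's PCA class (the original post's full plotting code is omitted here):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()            # 8x8 handwritten digit images, flattened to 64 features
pca = PCA(n_components=2)         # keep the two directions of maximum variance
reduced = pca.fit_transform(digits.data)

print(digits.data.shape)          # (1797, 64)
print(reduced.shape)              # (1797, 2)
```

Scattering the two components against each other, colored by digits.target, reproduces the clustered plot described below.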

I applied PCA on the Digits dataset from sklearn – a collection of 8×8 images of handwritten digits. The plot I obtained is rather impressive. The digits appear nicely clustered:

Head on to our Comprehensive Guide to 12 Dimensionality Reduction techniques with code in Python for a deeper insight into PCA and 11 other Dimensionality Reduction techniques. It is honestly one of the best articles on this topic you will find anywhere.

6. Singular Value Decomposition

In my opinion, Singular Value Decomposition (SVD) is underrated and not discussed enough. It is an amazing technique of matrix decomposition with diverse applications. I will try and cover a few of them in a future article.

For now, let us talk about SVD in Dimensionality Reduction. Specifically, this is known as Truncated SVD.

We start with the large m x n numerical data matrix A, where m is the number of rows and n is the number of features

Decompose it into 3 matrices as shown here:

Choose k singular values based on the diagonal matrix and truncate (trim) the 3 matrices accordingly:

Finally, multiply the truncated matrices to obtain the transformed matrix A_k. It has the dimensions m x k. So, it has k features with k < n

Here is the code to implement truncated SVD in Python (it’s quite similar to PCA):

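A minimal sketch using scikit-learn's TruncatedSVD class, again on the Digits data (plotting omitted):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import TruncatedSVD

digits = load_digits()
svd = TruncatedSVD(n_components=2)    # keep k = 2 singular values
reduced = svd.fit_transform(digits.data)

print(reduced.shape)                  # (1797, 2): k features with k < n
```

The interface mirrors the PCA snippet above; only the decomposition differs.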

On applying truncated SVD to the Digits data, I got the below plot. You’ll notice that it’s not as well clustered as we obtained after PCA:

Natural Language Processing (NLP)

Natural Language Processing (NLP) is the hottest field in data science right now. This is primarily down to major breakthroughs in the last 18 months. If you were still undecided on which branch to opt for – you should strongly consider NLP.

So let’s see a couple of interesting applications of linear algebra in NLP. This should help swing your decision!

7. Word Embeddings

Machine learning algorithms cannot work with raw textual data. We need to convert the text into some numerical and statistical features to create model inputs. There are many ways for engineering features from text data, such as:

Meta attributes of a text, like word count, special character count, etc.

NLP attributes of text using Parts-of-Speech tags and Grammar Relations like the number of proper nouns

Word Vector Notations or Word Embeddings

Word Embeddings is a way of representing words as low dimensional vectors of numbers while preserving their context in the document. These representations are obtained by training different neural networks on a large amount of text which is called a corpus. They also help in analyzing syntactic similarity among words:

Word2Vec and GloVe are two popular models to create Word Embeddings.

I trained my model on the Shakespeare corpus after some light preprocessing using Word2Vec and obtained the word embedding for the word ‘world’:

Pretty cool! But what’s even more awesome is the below plot I obtained for the vocabulary. Observe that syntactically similar words are closer together. I have highlighted a few such clusters of words. The results are not perfect but they are still quite amazing:

There are several other methods to obtain Word Embeddings. Read our article for An Intuitive Understanding of Word Embeddings: From Count Vectors to Word2Vec.

8. Latent Semantic Analysis (LSA)

What is your first thought when you hear this group of words – “prince, royal, king, noble”? These words, though different, are almost synonymous.

Now, consider the following sentences:

The pitcher of the Home team seemed out of form

There is a pitcher of juice on the table for you to enjoy

The word ‘pitcher’ has different meanings based on the other words in the two sentences. It means a baseball player in the first sentence and a jug of juice in the second.

Both these sets of words are easy for us humans to interpret with years of experience with the language. But what about machines? Here, the NLP concept of Topic Modeling comes into play:

Topic Modeling is an unsupervised technique to find topics across various text documents. These topics are nothing but clusters of related words. Each document can have multiple topics. The topic model outputs the various topics, their distributions in each document, and the frequency of different words it contains.

Latent Semantic Analysis (LSA), or Latent Semantic Indexing, is one of the techniques of Topic Modeling. It is another application of Singular Value Decomposition.

Latent means ‘hidden’. True to its name, LSA attempts to capture the hidden themes or topics from the documents by leveraging the context around the words.

I will describe the steps in LSA in short so make sure you check out this Simple Introduction to Topic Modeling using Latent Semantic Analysis with code in Python for a proper and in-depth understanding.

First, generate the Document-Term matrix for your data

Use SVD to decompose the matrix into 3 matrices:

Document-Topic matrix

Topic Importance Diagonal Matrix

Topic-term matrix

Truncate the matrices based on the importance of topics

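A minimal sketch of these steps using scikit-learn; the four toy documents below are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "The pitcher threw a fastball in the baseball game",
    "The batter hit the ball out of the stadium",
    "Pour the juice from the pitcher into the glass",
    "The jug of water sat on the kitchen table",
]

# Step 1: build the document-term matrix
vectorizer = TfidfVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)

# Steps 2-3: truncated SVD of the document-term matrix = LSA with 2 topics
lsa = TruncatedSVD(n_components=2)
doc_topic = lsa.fit_transform(dtm)    # document-topic matrix

print(doc_topic.shape)                # (4, 2): one row per document, one column per topic
```

lsa.components_ holds the corresponding topic-term matrix, and the singular values play the role of the topic-importance diagonal.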

For a hands-on experience with Natural Language Processing, you can check out our course on NLP using Python. The course is beginner-friendly and you get to build 5 real-life projects!

Computer Vision

Another field of deep learning that is creating waves – Computer Vision. If you’re looking to expand your skillset beyond tabular data (and you should), then learn how to work with images.

This will broaden your current understanding of machine learning and also help you crack interviews quickly.

9. Image Representation as Tensors

How do you account for the ‘vision’ in Computer Vision? Obviously, a computer does not process images as humans do. Like I mentioned earlier, machine learning algorithms need numerical features to work with.

A digital image is made up of small indivisible units called pixels. Consider the figure below:

This grayscale image of the digit zero is made of 8 x 8 = 64 pixels. Each pixel has a value in the range 0 to 255. A value of 0 represents a black pixel and 255 represents a white pixel.

Conveniently, an m x n grayscale image can be represented as a 2D matrix with m rows and n columns with the cells containing the respective pixel values:

But what about a colored image? A colored image is generally stored in the RGB system. Each image can be thought of as being represented by three 2D matrices, one for each R, G and B channel. A pixel value of 0 in the R channel represents zero intensity of the Red color and of 255 represents the full intensity of the Red color.

Each pixel value is then a combination of the corresponding values in the three channels:

In reality, instead of using 3 matrices to represent an image, a tensor is used. A tensor is a generalized n-dimensional matrix. For an RGB image, a 3rd ordered tensor is used. Imagine it as three 2D matrices stacked one behind another:
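As a tiny illustration (the pixel values are made up), an RGB image is just a 3rd-order numpy array:

```python
import numpy as np

# A 2x2 RGB image as a 3rd-order tensor: height x width x channels
image = np.array([
    [[255, 0, 0], [0, 255, 0]],       # red pixel, green pixel
    [[0, 0, 255], [255, 255, 255]],   # blue pixel, white pixel
], dtype=np.uint8)

print(image.shape)                    # (2, 2, 3)
red_channel = image[:, :, 0]          # slicing recovers one 2D matrix per channel
print(red_channel)
```

Slicing along the last axis gives back the three stacked 2D matrices described above.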

10. Convolution and Image Processing

2D Convolution is a very important operation in image processing. It consists of the below steps:

Start with a small matrix of weights, called a kernel or a filter

Slide this kernel on the 2D input data, performing element-wise multiplication

Add the obtained values and put the sum in a single output pixel

The function can seem a bit complex but it’s widely used for performing various image processing operations like sharpening and blurring the images and edge detection. We just need to know the right kernel for the task we are trying to accomplish. Here are a few kernels you can use:

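A naive numpy sketch of the three convolution steps above, together with a classic sharpening kernel (one well-known kernel among many; this stands in for the article's original gist):

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' 2D convolution: slide the flipped kernel over the image,
    multiply element-wise, and sum into each output pixel."""
    k = np.flipud(np.fliplr(kernel))   # true convolution flips the kernel
    kh, kw = k.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * k)
    return out

# A classic 3x3 sharpening kernel
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

img = np.arange(25, dtype=float).reshape(5, 5)
print(convolve2d(img, sharpen).shape)   # (3, 3)
```

Swapping in an averaging kernel blurs instead of sharpens; the sliding mechanics stay identical.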

You can download the image I used and try these image processing operations for yourself using the code and the kernels above. Also, try this Computer Vision tutorial on Image Segmentation techniques!

Amazing, right? This is by far my most favorite application of Linear Algebra in Data Science.

Now that you are acquainted with the basics of Computer Vision, it is time to start your Computer Vision journey with 16 awesome OpenCV functions. We also have a comprehensive course on Computer Vision using Deep Learning in which you can work on real-life Computer Vision case studies!

End Notes

My aim here was to make Linear Algebra a bit more interesting than you might have imagined previously. Personally for me, learning about applications of a subject motivates me to learn more about it.


Best Mouse For Programming & Coding (2023 Update)

We are reader supported and may earn a commission when you buy through links on our site

27 Mice Tested

210+ Hours of Research

2k+ Reviews Examined

Unbiased Reviews

Programmers spend a lot of time in front of a PC and can develop repetitive strain injuries from long hours of mouse usage. A standard mouse will only aggravate such injuries. A mouse that puts your hand in a more neutral position is perhaps the best way to alleviate these problems – enter vertical and trackball mice. With the plethora of choices on the market, a coder can struggle to select the right mouse for his or her needs. This guide should help.

Best Mouse for Programming & Coding: Top Picks

#1: Logitech MX Master 3 Advanced Wireless Mouse

The Logitech MX Master 3 Advanced Wireless Mouse is an ultra-fast mouse that enables you to work with precision. It offers app-specific customization that can speed up your workflow with predefined profiles.

The mouse gives you a seamless experience while working across up to three computers: you can move the cursor, files, and text between machines, even across operating systems. This Logitech mouse works nicely on any surface, even glass, thanks to its 4000 DPI sensor.

Key Specifications

Weight: 141 Grams

Color: Black

DPI: 200-8000 DPI

Supported Devices: Laptop, Desktop

👍 Pros 👎 Cons

Provides easy connectivity. Bluetooth can drop out during sustained use.

It is extremely comfortable.  

The mouse is very lightweight.  

It is very accurate.  

Value for money.  

#2: Logitech MX ERGO Trackball – High Tech Mouse for People with Carpal Tunnel Syndrome

Logitech MX ERGO Trackball: It takes time to understand the trackball concept until you get hold of one. Logitech’s ERGO lets you roll the ball with your thumb to move the mouse pointer across the screen.

What this means is that you can move the cursor with little to no arm movement. This is great for people suffering from wrist or joint pain, as the only part of your arm you have to move is your thumb. It also means you can use the mouse on basically any surface, however cluttered, uneven, or slippery it is. Do you want to comfortably control your smart TV while sitting on the couch with the mouse on your leg? Well, now you can.

The ERGO comes with a magnetic hinge, which can be adjusted to tilt the mouse from horizontal to a maximum of 20 degrees. This allows you to set the tilt of the mouse to match your taste and comfort. The 20-degree maximum means that it is not strictly vertical, but the dimensions of the mouse along with the trackball based operation should make it comfortable enough for almost anyone to use.

The mouse has 8 customizable buttons and is Logitech FLOW enabled. What Logitech means by the FLOW is the ability to connect to up to 2 devices simultaneously, allowing you to switch control between the two different systems seamlessly. You connect the mouse to one device with the Unifying USB connector, and to the other by Bluetooth. There is an ‘Easy Switch’ button just below the scroll wheel that lets you switch between two computers with ease.

The mouse is a bit heavier than the M570, but it is made of soft rubber that allows a nice comfortable grip. When I tried the MX ERGO, it felt solid and fit my hand well. Logitech’s MX ERGO trackball is compatible with Windows and Mac.

There is a USB receiver, but the bottom does not have a compartment to store it. This should not be an issue, though, because the receiver is small enough to leave plugged into your laptop. The model also connects via Bluetooth.

The trackball has a high-speed mode and a high-precision mode, which you can switch between at the press of a button. The scroll wheel tilts to the sides too, letting you perform horizontal scrolling. That is something you might not realize how much you’ve missed until you’ve used it.

Like other ergonomic mice, the MX ERGO has the forward and back buttons. There is also a button included next to the trackball that lets you change the DPI on the mouse.

ERGO comes with rechargeable batteries that last about 4 months.

You may not see the benefits of purchasing Logitech’s MX ERGO Trackball if you are not a heavy computer user. However, if you work long hours a day on your computer, this is the mouse for you.

There’s little to dislike about the overall performance of the MX ERGO, but for me, the fact that it does not have a left-hander’s version is a big disappointment.

Key Specifications

Weight: 140 Gram

Color: Black

DPI: 512 – 2048 DPI

Supported Devices: ‎Personal Computer

👍 Pros:

Feels comfortable in the hand

Tilting stand lets you try different comfortable angles

Precision mode helps when making fine adjustments

Scroll in vertical as well as horizontal direction with the trackball

Excellent hardware quality

Rechargeable battery that holds power for long on a full charge

👎 Cons:

No version for left-handers

A bit heavy

No official Linux support

#3: Tobo Vertical Mouse

4.7

Number of Buttons: 6

Movement Detection Technology: Optical

Special Feature: Rechargeable, Wireless

Product Dimensions: 11.68 x 6.86 x 9.91 cm


The Tobo Vertical Mouse has an auto-sleep feature that puts it to sleep after 8 minutes of inactivity; you’ll have to press the left or right button to wake it up. It is worth mentioning that the mouse comes with a convenient slot underneath to store the USB dongle. It weighs close to 100 g and is easy to maneuver.

Even though this mouse packs some great features that let me surf the internet comfortably, there are still a few things that could be improved. For one, I was not impressed with its scroll wheel. Yes, it works fine, but it feels a bit flimsy and difficult to control.

I should also point out that the back and forward buttons are uncomfortable to use at times. After comparing it to other models within this price range, the Tobo Vertical Mouse is a solid option if you want the best ergonomic mouse that won’t dent your wallet.

Key Specifications

Weight: 150 g

Color: Black

DPI: 800 / 1200 / 1600 DPI

Supported Devices: ‎‎Laptop, Personal Computer

👍 Pros:

Comfortable for those with large or smaller hands

Multiple tracking options and vertical orientation

Wireless

Power saving

Works on Windows, Mac, and Linux

👎 Cons:

The scroll wheel does not feel solid

Buttons are a bit rigid

#4: TECKNET – Best 2.4G Wireless Mouse Portable

TECKNET is a lightweight wireless mouse that comes with an easy-to-connect USB receiver. Its most remarkable feature is its 5 adjustable DPI levels and 6 buttons for PC, notebook, and more. It has a comfortable ergonomic design and comes in a variety of vibrant colors.

The battery of this mouse lasts up to 15 months, so you do not need to worry about changing it often. No driver is required; just plug in the receiver and you are ready to work. The adjustable DPI levels let you change the cursor sensitivity based on your activity.

TECKNET offers wide compatibility with Linux and the latest versions of Windows. It works perfectly with laptops, desktops, MacBooks, PCs, and other devices. The mouse switches to a power-saving mode if it is not used for more than 8 minutes. This mouse is the right choice for people who want to work long hours.

Key Specifications

Weight: 75 Grams

Color: Black, Blue, Coffee, Grey

DPI: 2000 / 1500 / 1000 DPI

Supported Devices: ‎‎‎Laptop, Personal Computer, Smartphone

👍 Pros:

Good for work and home.

You can use it on any surface.

Simple and easy to use.

The mouse buttons and wheels are perfectly responsive.

👎 Cons:

It does not fit medium-sized hands well.

#5: Logitech M510 – Best to avoid Wrist Pain


I was impressed with the design of the M510. My hand curved around the body comfortably, and it felt natural. Just like a traditional gaming mouse, the M510 has two customizable mouse buttons and a rubbery scroll wheel in between. The wireless mouse also has forward and back buttons on the right for easy internet navigation. The quick-navigation programmable buttons seemed too far from my fingers; I had to move my hand to reach them. This may not be an issue if you have large hands.

At the bottom, there is an on/off button and a removable cover that lets you access the battery compartment and a section for holding the USB wireless adapter. The receiver is so small that you can always leave it plugged into your laptop without worrying about breaking it.

The M510 is powered by a single AA battery, which is included in the box (yay!!). The battery is expected to last around 18 months, and a warning light will inform you when it is about to run out. The best part is Logitech’s SetPoint software, which lets you customize the mouse settings to your preference.

The Logitech Wireless M510 provides reliable performance and has a set of features that make it ideal for people with wrist pain and carpal tunnel syndrome. People of all hand sizes should find the M510 reasonably comfortable, as its dimensions make it versatile. At 140 grams, the optical mouse feels solid and gives the user a quiet confidence of quality.

Key Specifications

Weight: 168 Grams

Color: Graphite

DPI: 1000 DPI

Supported Devices: ‎‎‎‎Personal Computer

👍 Pros:

Can be operated with minimal hand movement

Long battery life

Long wireless range

Works well on any surface

👎 Cons:

Trackball needs to be cleaned occasionally

No left-handed version

Works only on Windows and Mac; no Linux support

#6: Amazon Basics – Best Mouse for Coders using Laptop

Let me first point out that this mouse has a compact size and is super lightweight, so you might want to consider it before your next trip. The Amazon Basics mimics the feel of a pen rather than that of a mouse: it fit between my thumb and middle finger, so it felt like I was holding a pen rather than a mouse.

Because the index finger rests on top, you can use it to work the scroll wheel, which allows surprisingly smooth operation across different programs. It works seamlessly across all operating platforms, so whether you have a Windows or an iOS device, this is a mouse you’d want to settle on. Setting up the mouse with any device is a walk in the park: simply plug the dongle into a USB port or pair via Bluetooth.

The mouse does have gesture-based controls that allow easy execution of functions such as scrolling or swiping. It is quite comfortable and fits the hand well. You can move the mouse with your fingers only. Of course, there are times when you’ll have to move your wrist when making major adjustments, but overall, this might be the best ergonomic mouse to prevent carpal tunnel syndrome.

The battery life is fantastic; a 30-second charge should give you at least an hour of use. The Amazon Basics worked well on the top of my laptop. You can comfortably use it on your desk or table, but its small size makes it perfect on top of your laptop. Sure, it sells on the higher end, but if you don’t mind throwing in a few extra dollars for an ergonomic device, this is a mouse to consider.

Key Specifications

Weight: 68 Grams

Color: Black

DPI: 1600 DPI

Supported Devices: Laptop, Desktop

👍 Pros:

Small and portable design

Rechargeable and allows hours of use

Gesture-based controls

Functions across Windows, Mac, Linux, iOS, Android

Wireless and Bluetooth

👎 Cons:

Expensive

#7: Microsoft Sculpt Ergonomic Mouse – Best Mouse for Windows Users


4.4

Number of Buttons: 7

Movement Detection Technology: Optical

Special Feature: Wireless

Product Dimensions:‎ 9.8 x 6.2 x 3.5 cm


The mouse is large, round, and tall, so your hands and fingers will sit in a different position than with a traditional mouse. There is also a thumb scoop where you can comfortably rest your thumb when using the mouse. When it comes to comfort, the Sculpt truly delivers. If you are a Windows user, you might want to check out the features of this mouse.

The big blue Windows Logo button caught my attention at first sight. This button gives you instant access to the start menu. While the mouse works with Mac devices, this button is specially designed for Windows and may be irritating for Mac users because it cannot be disabled or remapped. The Sculpt proves to be a well-designed and solid mouse for those who want a basic ergonomic mouse that is wireless.

There is a small button beneath the Windows button that serves as a ‘back’ button. It’s unfortunate that there is no forward button considering there is a lot of space in this section. The size of the USB dongle is also a bit disappointing. The USB plug sticks out a few inches on the side of my laptop. This is a minor issue but something to keep in mind, so you don’t accidentally break it. There is, however, a storage compartment underneath to keep it when it is not in use.

Key Specifications

Weight: 339 Grams

Color: Black

DPI: 1000 DPI

Supported Devices: Laptop

👍 Pros:

Wireless

Looks stylish and smooth

Tracks on a variety of surfaces

Affordable

Easy to set up

👎 Cons:

No Bluetooth feature

USB receiver protrudes a few inches

Non-customizable Windows Start-menu button

FAQs

Choose a wireless mouse, as it does not restrict the movement of your hands with cables.

Go for a vertical or trackball mouse rather than a standard mouse.

Make sure to choose the left- or right-handed version. Very few models on the market are designed for left-handed people; the Jelly Comb Left Handed Mouse could be considered.

Extra mouse buttons, sensors, size, and grip are other essential factors to consider while making your buying decision.

You may have the best programming mouse on the market, but your programming productivity will suffer if you do not use it right. A few tips I can give are:

When using a standard type of mouse, you twist your arm to adjust to the mouse. The twisting strains your wrist, leaving it numb and stiff and causing Repetitive Strain Injury (RSI). Medically speaking, Repetitive Strain Injury is a cumulative trauma disorder that stems from prolonged repetitive hand movements. RSI damages the muscles and tendons of your hand, forearm, shoulder, and neck.

To use a standard mouse, you twist your wrist continually (pronation), which causes stress to the tendons. Your wrist is not meant to go through such strain for long.

The neutral (handshake) position is the best way to use your mouse, and this is where an ergonomic mouse comes in. Instead of you twisting your arm to adjust to the mouse, ergonomic mice are designed to adjust to your arm. The neutral position requires less strength, which in turn helps release tension from the tendons. Using a vertical ergonomic mouse transfers the load from the wrist to stronger muscles in the upper arm.

