How To Find Out If Someone Was Snooping Around On Your Computer


Someone may be snooping around on your computer, and that is definitely a problem. In many cases, the person accessing your computer is likely someone you know, such as a family member or friend. In other situations, a colleague at work might have gained access if you left your laptop unattended for a period of time.

How to find out if someone was snooping around on your computer?

The question is, how can we find out for sure if this has happened? The first step is knowing where to begin, and that is what we plan to discuss in this article.

Bear in mind that a trace of almost every action taken on your computer is stored, which means there are ways to tell if someone has been messing around without your consent. Nothing here will determine who the culprit is, but it should give you an idea:

Check for newly installed apps

Check your web browser history

Check Quick access

Take a look at Windows 10 Logon Events

Turn on logon auditing on Windows 10 Pro

One of the best safe computing habits to cultivate is to lock the screen of the computer with a password when you are not at it. It hardly takes a moment: just press WinKey+L to lock the computer. This prevents others from snooping around on your computer when you are not around.

1] Check for newly installed apps

From here, you should see the apps that were installed most recently. If none of them were installed by you, then chances are a third party has been playing around with your computer.

Read: How to avoid being watched through your own Computer.

2] Check your web browser history

In several cases, a person who uses your computer without consent might decide to use the web browser for whatever reason. With this in mind, we suggest checking your web browser history, just in case the culprit did not delete the evidence of their transgressions.

If you have multiple web browsers, then check the history of each and not just the one you use on a regular basis.

Most browsers support pressing Ctrl+H keys to open the browsing history panel.
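For a scripted check, browsers such as Chrome keep history in a SQLite database. The sketch below is a minimal, hedged example assuming the Chrome-style `urls` table; the file's exact location varies by platform and profile, and the helper name is ours.

```python
import sqlite3

def recent_history(db_path, limit=10):
    """Return (url, title) rows from a Chrome-style History database,
    most recent visit first. Copy the file before opening it, since the
    browser keeps it locked while running."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT url, title FROM urls "
            "ORDER BY last_visit_time DESC LIMIT ?", (limit,)
        ).fetchall()
    finally:
        con.close()
```

On Windows, Chrome typically keeps this file at `%LOCALAPPDATA%\Google\Chrome\User Data\Default\History`.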

Read: Find out if your online account has been hacked and email & password details leaked.

3] Check Quick access

For those who had no idea, let us make it clear that Windows 10 makes it possible to check recent user activity.

For example, you can open Microsoft Word to check if any files have been modified. The same goes for Excel, PowerPoint, or any other tool that falls under Microsoft Office.

Additionally, press the Windows key + E to open File Explorer. Look for Quick access in the navigation pane on the left, and select it.

Right away, you should see a list of recently added or modified files. If any of them were not added or modified by you, someone else may have accessed the device.
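The same idea can be approximated with a short script that sorts a folder's contents by modification time. This is a generic sketch (the helper name is ours); on Windows, the Recent items folder is typically `%APPDATA%\Microsoft\Windows\Recent`.

```python
import os

def recently_modified(folder, limit=10):
    """Return file names in `folder` sorted by modification time, newest first."""
    entries = [
        (os.path.getmtime(os.path.join(folder, name)), name)
        for name in os.listdir(folder)
        if os.path.isfile(os.path.join(folder, name))
    ]
    entries.sort(reverse=True)
    return [name for _, name in entries[:limit]]
```

Timestamps you don't recognize are the thing to look for here, just as in Quick access.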

Read: How do I know if my Computer has been Hacked?

4] Take a look at Windows 10 Logon Events

Look for Event ID 4624, which means Logon, and 4634, which means Logoff; an unexpected logoff event suggests someone signed out of (or shut down) your computer while you were away. You should also look for 4672 with the Task Category name Special Logon, which is logged when an account with administrative privileges signs in.
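If you export the Security log to XML (for example with `wevtutil qe Security /f:xml`, with the events wrapped in a single root element), a short script can filter for these IDs. This is a hedged sketch against the standard Windows event schema, not an official tool:

```python
import xml.etree.ElementTree as ET

NS = "{http://schemas.microsoft.com/win/2004/08/events/event}"
LOGON_IDS = {"4624": "Logon", "4634": "Logoff", "4672": "Special Logon"}

def logon_events(xml_text):
    """Yield (event_id, meaning, time) for logon-related events from an
    exported Security log whose events are wrapped in one root element."""
    root = ET.fromstring(xml_text)
    for event in root.iter(NS + "Event"):
        system = event.find(NS + "System")
        event_id = system.find(NS + "EventID").text
        if event_id in LOGON_IDS:
            created = system.find(NS + "TimeCreated").get("SystemTime")
            yield event_id, LOGON_IDS[event_id], created
```

Logon events at times when you were away from the machine are the red flag.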

Read: Safe Computing Tips, Practices and Habits for PC users.

5] Turn on logon auditing on Windows 10 Pro

Here’s the thing: this feature is automatically up and running in Windows 10 Home, but when it comes to the Pro version, you may have to enable it manually.

Let us know if you have any tips.


How To Reset Your Computer Password If You Lock Yourself Out

Being the administrator of a computer or computer network is important. You’re the person whom everyone goes to when they lock themselves out of their computer or need a new piece of software added to their accounts. 

But what can you do if you get locked out yourself? Although a little embarrassing, it can happen to anyone. Everyone has misplaced their keys at some point or another, making getting back into the car or home a real pain.


What you’d normally do is call a professional to handle this sort of thing. They can get you back into whatever you’ve locked yourself out of, as long as you’re willing to pay for the service.

When it comes to getting yourself locked out of your admin account, the same resolution is possible. However, there is also a way to do away with the need for involving someone else – and the cost – and simply do it yourself instead by resetting your password.

How To Reset Your Computer Password 

The key to “hacking” your way back into your admin account is doing it in a way that causes little to no harm to your files and data stored on the computer. The process should also be as painless as possible and not cause too much of a headache.

Remember that this is for administrator account restoration for forgetting your password, or being locked out from an incorrect password input. These are not techniques to be used if your computer is facing a virus or has been hacked by an outside entity. 

Windows (10, 8.1, 7, Vista, & XP)

In the past, we’ve touched on a similar subject specifically for Windows 7 and 8.1. With this new method, we’ll be looking at free utilities like Trinity Rescue Kit (TRK) or MediaCat USB. 

We’ll be focused on TRK for this tutorial as it is one of the best utilities out there to reset your admin password. Though if you happen to be dealing with Linux instead of Windows, MediaCat USB can help with that.

TRK can help you recover more than just the administrator password for your computer. It also provides help in the recovery of files, evacuating a faulty or dying disk, and can scan for rootkit malware, as well as any other disaster recovery tasks you might need.

You’ll need to load TRK to either a CD/DVD or USB drive as it will need to run prior to Windows loading. Navigate to the official site and download the program.

Once downloaded, either burn it to CD/DVD or move it over to the USB drive. If you currently have a blank CD in your CD burner, TRK will detect this and ask if you’d like to proceed with burning the program to a CD.

Prior to loading TRK, ensure that you head into your computer’s BIOS (or UEFI) and have it set to boot from USB/CD/DVD. Not doing so will cause the computer to boot as normal and bypass the TRK utility.

Getting into your BIOS usually requires pressing a key such as Del, F2, or F12 while your computer is restarting. You’ll need to refer to the manual that came with the motherboard or computer to determine how yours is set up.

The chances are rather good that if you have Windows 10, you’re using UEFI. Some Windows 8 machines are also UEFI, and you’ll need to determine whether yours is before proceeding.

To get into UEFI, hold down the correct hotkey during restart, similar to BIOS. There are other methods, but they will require that you are logged into Windows. Seeing as the whole purpose of this article is you not having access to Windows, those other methods won’t help at this time.

Once you’ve booted up the program, you’ll be presented with the TRK 3.4 splash screen. Select Run Trinity Rescue Kit 3.4 (default mode, with text menu) and press Enter.

From the simple menu, arrow down to Windows password resetting and press Enter.

Arrow down again until you highlight Reset password on built-in Administrator, and press Enter again.

Locate the section Windows NT/2K/XP. Underneath, make a note of the number found next to the Windows folder. Enter that number into the prompt and press Enter.

Enter 1 under the User Edit Menu, and press Enter. This will remove the password set for Administrator. You can press any key to continue.

Again, use the arrow keys to highlight Main Menu, and press Enter.

For the final time, use the arrow keys to highlight Poweroff computer, and press Enter. You can now eject your CD/DVD or USB so that the boot can return to normal.

Allow Windows to boot up, then log into the Administrator account while leaving the Password portion blank.

For Mac OS X 10.4 – 10.6 (Tiger, Leopard, and Snow Leopard)

The methods to reset your password for older versions of Mac OS X are a bit simpler than that of the Windows operating system – as long as you still have the OS X DVD that came with the computer or OS X upgrade.

For Mac OS X 10.7+ (Lion and Above)

Newer versions of Mac OS X, now macOS, make this even easier than in the past. There is no more need for a disk, as everything you need to reset your password is right there in the operating system.

Restart the computer and hold down the ⌘ + R keys once the restart begins. You’ll have to continue holding down the keys until the Apple logo appears on-screen.

After the startup sequence is completed, you should have the Recovery HD utility window on-screen.

Open a Terminal window while in the utility, type resetpassword, and press Return.

How To Tell If Your Computer Is 32 Or 64 Bit On Windows 11

Knowing which version of Windows you’ve installed is a handy bit of information that will help you install the right software versions, device drivers, and let you know whether or not your system is capable of running the latest iteration of Windows. 

With that in mind, here is everything related to CPU and OS architectures for Windows 11, and how you can check if your computer is 32-bit or 64-bit.

How to check computer architecture on Windows 11

Before we begin, let’s talk about the relationship between CPU and OS architectures. Everything begins and ends with the architecture of your processor. If you have a 32-bit processor, you can only install a version of Windows built specifically for that. On the other hand, if you have a 64-bit processor, you can have either the 32-bit or the 64-bit version of Windows. 

Related: How to Disable Updates on Windows 11

As such, knowing which computer architecture you have is important if you don’t want compatibility issues with your applications and device drivers. Below are all the ways that you can check your computer architecture.

Method #01: Check Device Specifications through Settings

One of the simpler ways to check your CPU architecture is via the Settings app. Here’s how to do so:

Then, under ‘Device Specifications’, look for System type. The architecture of both your OS and your processor will be listed next to it.

Related: How to Search in Windows 11

Method #02: Check System Information

Windows has had a ‘System Information’ app ever since the days of XP that gives you all the information that you could need about your system. Here’s how to use it to check whether your computer is 32- or 64-bit:

Related: How to Show Hidden Files on Windows 11

Method #03: Check the ‘Program Files’ folder

64-bit versions of Windows can only run on systems that have a 64-bit architecture. But they can install both 32-bit and 64-bit software programs. This is why a 64-bit computer will have two ‘Program Files’ folders: Program Files and Program Files (x86). 32-bit versions of Windows, on the other hand, can only install 32-bit programs, and therefore only have a single ‘Program Files’ folder.

So, if you quickly want to know whether you have a 32-bit or 64-bit computer, simply go to the C: drive (default system drive) and check for the ‘Program Files’ folder(s). If there are two, you have a 64-bit computer. If one, then 32-bit.
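This folder heuristic is easy to express in code. A small sketch (the helper name is ours) that takes the list of top-level folder names found on the system drive:

```python
def looks_64bit(folder_names):
    """Heuristic from the article: a 64-bit Windows install shows both
    'Program Files' and 'Program Files (x86)' on the system drive."""
    names = set(folder_names)
    return "Program Files" in names and "Program Files (x86)" in names
```

On an actual Windows machine you would call it as `looks_64bit(os.listdir("C:\\"))`.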

Method #04: Check System Info in Command Prompt

Information about the system can be easily extracted from terminal applications like the Command Prompt and PowerShell. Here’s how you can find out your computer’s architecture from the Command Prompt:

Then type the following command:

systeminfo | findstr /C:"System Type"

Press Enter. Information about your computer’s architecture will be mentioned next to ‘System Type’.

Method #05: Check OS architecture in PowerShell

To check your OS architecture in PowerShell, follow the steps below:

Then type the following command:

wmic os get OSArchitecture

PowerShell will print your OS architecture on the next line (for example, ‘64-bit’).
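As an aside, if Python happens to be installed, its standard library reports the same information from either Command Prompt or PowerShell. A quick sketch:

```python
import platform
import struct

# Pointer size of the running interpreter: 8 bytes on 64-bit, 4 on 32-bit.
bits = struct.calcsize("P") * 8
print(f"Python build: {bits}-bit")
print(f"Machine:      {platform.machine()}")  # e.g. 'AMD64' on 64-bit Windows
```

Note this reports the interpreter's own build width, which can be 32-bit even on a 64-bit OS.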

Keyboard shortcut to check computer architecture

To check if your system type is 32-bit or 64-bit, press the Windows key and ‘Pause’ or ‘Break’ button simultaneously (Win + Pause). If you have a built-in keyboard (for laptops), you may have to press the Function key to get the Pause button (Win + Fn + Pause). 

This will open the ‘About’ page in the Settings app where you’ll be able to find your system architecture next to ‘System type’.

Frequently Asked Questions (FAQs):

Let’s take a look at a few commonly asked questions about computer and operating system architecture.

What is the difference between 32-bit and 64-bit versions of Windows?

Before we begin to list out the differences between the 32-bit and 64-bit versions of Windows, the most obvious question to ask would be – what does 32 or 64 even stand for? For computer processors, this is the width of the CPU register. 

The CPU register holds a small bit of storage space for whenever it needs to access data quickly. A 32-bit CPU register can hold up to 2³² entries (and thus can only access 4GB of RAM), whereas a 64-bit CPU register holds up to 2⁶⁴ entries. Clearly, 64-bit processors trump their predecessors by a huge margin when it comes to speed and performance. 64-bit processors are also much safer than their 32-bit cousins. 
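The 4 GB figure follows directly from the address width, and a one-liner confirms the arithmetic:

```python
# A 32-bit address can name 2**32 distinct byte locations.
addressable_bytes = 2 ** 32
gib = addressable_bytes / 2 ** 30  # bytes in one gibibyte
print(gib)  # 4.0
```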

This difference also necessitates developers to create two different versions of their apps and software, one for 32-bit and another for 64-bit. Such is the case for Windows as well. However, things have started to change with Windows 11.

Does Windows 11 support 32-bit processors?

Microsoft has clearly stated that one of the minimum requirements to run Windows 11 is a 64-bit processor. This also means that if you’ve already got Windows 11, you can rest assured knowing that you have a 64-bit processor.

Starting with Windows 11, Microsoft will no longer release 32-bit builds for OEM distribution either. Basically, 32-bit CPUs do not meet either the hard or the soft requirement for Windows 11 and if you want to transition to the latest iteration of Windows, you will have to upgrade your PC. But if you’re on a previous version of 32-bit Windows and don’t want to upgrade, don’t worry. You will continue to get updates and security features for your 32-bit Windows 10 system.

Can I install a 64-bit version of Windows on a 32-bit CPU?

No, you cannot have a 64-bit version of Windows on a 32-bit CPU. Only a 32-bit version of Windows can be installed on a 32-bit CPU. On the other hand, backward compatibility is possible and you can have a 32-bit version of Windows on a 64-bit CPU. 

Can I upgrade from 32-bit to 64-bit Windows?

Yes, you can upgrade from a 32-bit version to a 64-bit version of Windows, but only if you have a 64-bit processor. The only way to do so is to perform a clean installation of a 64-bit version of Windows.

Whether you want to upgrade your PC or install the appropriate version of drivers and software, knowing which CPU and Windows architecture your system has is an important bit of information. Given the widespread use of 64-bit processors these days, you are most likely to have a 64-bit OS (especially so if you’re running Windows 11). 


How To Stream Music All Around Your House


Thanks to the wonders of Wi-Fi and Bluetooth, getting your tunes pumping in every room of the house no longer has to be a complicated or expensive process. If you want home-filling audio, you’ve got a variety of choices when it comes to hardware and technology—and we’ll lay them out here.

This is not an exhaustive list, but these are some of the most popular and least expensive options worth considering. You’ve also got the option to combine some of these speaker setups together, so you don’t need to feel locked into one system or provider.

Sonos Play speakers

Sonos has been setting the standard for wireless speakers. Sonos

Sonos is generally considered to be the gold standard in terms of wireless home audio, offering a range of excellent-sounding speakers that can work independently or as a group. What makes Sonos systems such a breeze to use is that these speakers connect directly to your router and the web—they’re always ready to play, so they don’t require a tedious connection process every time you want to play some tracks.

The music is controlled via apps on your phone or computer, and you can seamlessly jump between them as needed. Supported music services include Spotify, Google Play Music, Apple Music, Amazon Music, Deezer, Microsoft Groove, and many more. Sonos can also play music you’ve got stored on your computers, phones, and tablets.

To add to the appeal, the Sonos app lets you associate speakers with the rooms they’re in, so you can seamlessly pick up your listening as you move from the kitchen to the living room, or have different tunes playing in different bedrooms. Sonos systems are simple to set up and sound great, but the tradeoff comes in the high price tags, which can push the cost of whole-house setup well over $1,000.

Sonos Play speakers start at $199.

Bose SoundTouch speakers

Bose SoundTouch offers similar features to the Sonos range. Bose

Bose has its own Sonos rival in the form of the SoundTouch speaker system, and a lot of the features are the same across the two platforms. The Bose kit can tap into services such as Spotify, Deezer and Amazon Music straight from the web, but (unlike Sonos) the speakers can also be connected to via Bluetooth if that’s what you prefer. That’s a plus if you have guests over who want to stream music from their devices.

The SoundTouch apps are available for computers and mobile devices, like the Sonos ones, and again you can assign your speakers to separate rooms and have different playlists on the go in each. All of the products in the SoundTouch lineup can be controlled through the app and integrated into a system, so expanding your audio arsenal in the future is straightforward, albeit a little on the pricey side.

Bose includes separate remotes with the SoundTouch speakers and the larger ones can operate independently too by connecting up to online radio stations. You can even add some SoundTouch smarts to your existing speakers with the SoundTouch Wireless Link adapter, which plugs straight into any speaker with an optical, RCA or AUX port available.

Bose SoundTouch speakers start at $199.95.

Google Home and Google Cast

Google Home gives you wireless audio and a smart assistant. Google

It’s fair to say that Google Home is a smart speaker first and a music speaker second, but the audio it pumps out is pretty decent, if not quite up to the level of Sonos and Bose. Google Home comes with Google Cast built-in, which is Google’s Chromecast-style tech for wirelessly beaming audio from phone apps and other sources to compatible speakers.

That means you can either ask the Google Assistant directly to play you some music, or cast it across through the apps for Google Play Music, Spotify, Deezer, Plex, TuneIn and many more. Of course the Google Assistant can read out your calendar and warn you about the traffic as well, so it’s a more versatile solution than the Sonos and Bose options.

It largely depends if you value audio quality or a smart assistant more. Bluetooth support has just been added to Google Home to give you another option, as has multi-user support, and if you buy more than one Google Home speaker you can set them up in different rooms with different playlists streaming, or join speakers together as a group.

The Google Home costs $129 direct from Google.

Amazon Echo

The Echo can stream music over the web or via Bluetooth. Amazon

Like Google Home, the Amazon Echo gives you a trade-off: The audio quality isn’t as top-notch as it is on other products (though it’s not bad), but you do have a smart assistant to talk to with a list of commands that runs into the thousands. You also have the option of buying an Echo Dot to add some extra smarts to the speakers you’ve already got installed.

In terms of music support, the big ones that the Echo works with directly (via an Alexa command) are Spotify, TuneIn and of course Amazon Music. However, as the Echo is equipped with Bluetooth, you can basically stream audio from any app on your phone or tablet to the Echo. It’s not the most elegant of solutions, but it works perfectly well.

Unfortunately you don’t get true multi-room support with the Echo, the kind where you can group Echos together or associate them with a room. That said, nothing is stopping you from buying several Echos and sticking them in different rooms — it’s just that you don’t get any house-wide syncing or organizing, though that could feasibly be a feature that’s added later.

The original Amazon Echo costs $179.99 direct from Amazon.

Apple HomePod and AirPlay

The Apple HomePod ships in December. Apple

The Apple HomePod is the newcomer to the party, and as it doesn’t launch until December, we can’t tell you much about it except what Apple has told us. On paper, it looks like a stylish mix of Sonos-style audio quality and a built-in smart assistant, though it’s priced to match—the starting cost is significantly higher than that of rival devices.

The HomePod can adjust its sound output to the room and position it’s been placed in, and can automatically link to another HomePod in the same room, Apple promises. On top of that, the new generation of Apple’s wireless audio technology, AirPlay 2, lets you listen through multiple HomePods in multiple rooms, if you’ve got the cash to afford several.

This being Apple though, the HomePod only works with Apple Music, and you’ve got no Bluetooth streaming to fall back on either. If you only use Apple-compatible kit then the HomePod and AirPlay 2 (and indeed AirPlay 1) will be a great solution for multi-room streaming, but everyone else is likely to find the music restrictions rather limiting.

The HomePod will cost $349 direct from Apple.

Computer Vision And How It Is Shaping The World Around Us

This article was published as a part of the Data Science Blogathon

Since the initial breakthrough in Computer Vision achieved by A. Krizhevsky et al. (2012) and their AlexNet Network, we have definitely come a long way. Computer Vision has since been making its way into day-to-day human lives without even us knowing about it. The one thing Deep Learning Algorithms need is data, and with the progress in portable camera technology in our mobile devices, we have it. A lot more, and a lot better. With great data, comes great responsibility. Data Scientists and Vision Engineers have been using data to create value in the form of awesome Vision applications.

Computer Vision has found applications in very diverse and challenging fields and these algorithms have been able to assist, and in some cases, outperform human beings. Be it Medical Diagnosis (Biology), Production Automation (Industry), Recommender Systems (Marketing), or everyday activities like driving or even shopping, Vision Systems are everywhere around us. In this blog, I am going to discuss some applications of computer vision, and how companies are implementing scalable vision systems to solve problems and generate value for their customers.

A timeline of some seminal computer vision papers against milestone events in computer vision product applications

Self Driving Vehicles


Tesla uses 8 cameras on the vehicle to feed their models, and the models do pretty much everything that can be done using video data, to guide the vehicle. The granular sub-applications that Tesla Autopilot needs to function are:

Detection and Recognition of Objects (Road Signs, Crosswalks, Traffic Lights, Curbs, Road Markings, Moving Objects, Static Objects) (Object Detection)

Following the Car Ahead (Object Tracking)

Differentiating between Lanes/ Lanes and Sidewalk / Switching Lanes (Semantic Segmentation)

Identifying Specific Objects (Instance Segmentation)

Responding to events (Action Recognition)

Smart Summon (Road Edge Detection)

Depth Estimation


Evidently, this is an extremely multitasked setting, where there is a need to know a lot about the scene at once. That is why the tech stack is designed so that there are multiple outputs for a given input sequence of images. The way it is implemented is that for a set of similar tasks, there is a shared backbone with a set of task-specific heads at the end, each of which gives a specific output.
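The shared-backbone idea can be illustrated with a toy sketch in plain Python. This is purely illustrative (the feature extractor, the head names, and the fake image are invented here), not Tesla's actual code:

```python
def backbone(image):
    """Stand-in shared feature extractor (hypothetical): per-channel means."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    return [sum(px[c] for px in pixels) / n for c in range(3)]

def detection_head(features):
    """Toy 'object detection' head."""
    return {"objects_score": max(features)}

def lane_head(features):
    """Toy 'lane estimation' head."""
    return {"lane_offset": features[0] - features[2]}

def hydranet(image, heads):
    features = backbone(image)  # the backbone runs once...
    return {name: head(features) for name, head in heads.items()}  # ...and feeds every head

# A fake 2x2 RGB "image"
image = [[(0.1, 0.2, 0.3), (0.5, 0.6, 0.7)],
         [(0.2, 0.2, 0.2), (0.4, 0.4, 0.4)]]
outputs = hydranet(image, {"detect": detection_head, "lanes": lane_head})
```

The point of the structure is the one expensive `backbone` call being amortized across all heads, which is exactly the parameter-sharing benefit described below.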


Some tasks require features from specific camera feeds to make a prediction, so each camera has its own HydraNet trained for camera-specific tasks. But there are more complicated tasks like steering the wheel, depth estimation, or estimating road layout, which might need information from multiple cameras, and therefore, features from multiple HydraNets at the same time to make a prediction. Many of these complicated tasks can be recurrent, adding another layer of complexity to the network.


Summing it up, Tesla’s Network consists of 8 HydraNets (for 8 cameras), each responsible for specific tasks. In addition to that, the features from these HydraNets go into another run of processing which requires camera interactions with each other, and spread over time, to derive meaningful insights and is responsible for more complex tasks.

According to Tesla, there are nearly a hundred such tasks. This modular approach has many benefits for Tesla’s specific use case:

It allows the network to be specifically trained for specific tasks. The network is subsampled for that specific task and is then trained for it.

It drastically reduces the overall number of trainable parameters, thus amortizing the process.

It allows certain tasks to be run in shadow mode while the overall system performs as usual.

It allows for quicker improvements to the overall network, as updates can be installed in parts rather than overall.



What Tesla has done well, and many other efforts at autonomous driving have failed to achieve, is data generation. By putting more and more products in the hands of consumers, Tesla now has a large source of quality data. They are able to capture disagreements between the human and the Autopilot by deploying models in live mode as well as shadow mode. In this way, they have been able to improve their models by running inference on real-world data, capturing disagreements and mistakes made by both the human and the Autopilot. As long as they receive well-labeled data, their models keep improving with minimal effort.

Medical Imaging


Arterys is one such leading player, reducing subjectivity and variability in medical diagnosis. They have used computer vision to cut the time needed to image blood flow in the heart from hours to minutes. This allowed cardiologists not only to visualize but also to quantify blood flow in the heart and cardiovascular vessels, improving the medical assessment from an educated guess to directed treatment. It allows cardiologists, assisted by AI, to diagnose heart diseases and defects within minutes of an MRI.

But why did it take hours for scans to generate flows in the first place? Let’s break this down.

Multiple in vivo scans are done to capture 3D Volume cycles over various cardiac phases and breathing cycles.

Iterative reconstruction methods, used to evaluate flow from the MRI data, further increase reconstruction times.


Along with 4D flow generation, object detection algorithms (Fast R-CNN (2015), R-FCN (2016)) in Arterys’ tech stack help to identify otherwise hard-to-spot abnormalities and contours in heart, lung, brain, and chest scans. The stack automatically indexes and measures the size of lesions in 2D and 3D space. Image classification networks help to identify pathologies like fracture, dislocation, and pulmonary opacity. Arterys trained its CardioAI network, which can process CT and MRI scans, on NVIDIA TITAN X GPUs running locally and on Tesla GPU accelerators running in Google Cloud Platform. Both were supported by the Keras and TensorFlow deep learning libraries. Inference occurs on Tesla GPUs running in the Amazon cloud.


Though these insights are very important for the medical professional, their availability to the medical professional can cause bias in the medical professional’s assessment of the case. Arterys mitigates this problem by flagging certain cases for attention but not specifying the exact location of the abnormality in the scan. These can be accessed once the specialist has made an unbiased assessment of the case.

Cloud-based deployment of its stack has allowed Arterys to provide reconstructions, as well as invaluable visual and quantifiable analysis, to its customers on a zero-footprint web-based portal in real time. Computer vision’s biggest impact in the coming years will be its ability to augment and speed up the workflow of the relatively small number of radiologists serving quickly growing elderly patient populations worldwide. The highest-value applications are in rural and medically underserved areas, where physicians and specialists are hard to come by.

Visual Search

Visual search is search based on images rather than text. It depends heavily on computer vision algorithms to detect features that are difficult to put into words or that would otherwise need cumbersome filters. Many online marketplaces and search tools have been quick to adopt this technology, and consumer feedback has been strongly positive. Forbes has forecasted that early adopters of visual search are projected to increase their digital revenue by 30%. Let us talk about a few early (and even late) adopters of visual search technology and how they have gone about implementing it.

Pinterest’s Visual Lens added a unique value to their customers’ experience wherein they could search for something difficult to put in words. Essentially, Pinterest is a giant Bipartite Graph. On one side are objects, which are pinned by the users, and on the other side are boards, where mutually coherent objects are present. An edge represents pinning an object (with its link on the internet) on a board that contains similar objects. This structure is the basis of Pinterest’s data and this is how Lens can provide high-quality references of the object in similar as well as richly varied contexts. As an example, if Lens detects Apple as an object, it can recommend Apple Pie Recipe and Apple farming techniques to the user, which belong to very separate domains. To implement this functionality, Pinterest has separated Lens’ architecture into two separate components.
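The bipartite pin/board structure can be sketched with a toy example. The boards and objects here are invented for illustration; the point is how co-pinning lets one object surface results from very different domains:

```python
from collections import defaultdict

# Toy bipartite graph: boards on one side, pinned objects on the other.
boards = {
    "Desserts":     {"apple", "pie", "cinnamon"},
    "Orchard care": {"apple", "pruning shears", "mulch"},
    "Breakfast":    {"pie", "coffee"},
}

def related_objects(query):
    """Objects co-pinned with `query` across all boards, ranked by
    how many boards they share with it."""
    counts = defaultdict(int)
    for pins in boards.values():
        if query in pins:
            for obj in pins - {query}:
                counts[obj] += 1
    return sorted(counts, key=counts.get, reverse=True)
```

For a query like "apple", this surfaces both dessert items and orchard items, mirroring how Lens can recommend a pie recipe and farming techniques from one image.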


In the first component, a query understanding layer has been implemented. Here, certain visual features are generated like lighting conditions and image quality. Basic object detection and colour features are also implemented as a part of the query understanding layer. Image Classification algorithms are also used to generate annotations and categories for the queried images.


In the second component, results from many models are blended to generate a continuous feed of results relevant to the queried image. Visual search is one of these models which returns visually similar results where the object and its context are strongly maintained. Another one would be an object search model which gives results that have the given object in the results. The third model uses the generated categories and annotations from the query understanding layer to do a textual image search to get the results. The blender does a very good job of dynamically changing the blending ratios as the user scrolls through the search results. Confidence thresholds are also implemented such that low confidence results from the query understanding layer are skipped while generating the final search results. Evidently, the basic technology supporting Pinterest Lens is object detection. It supports Visual Search, Object Search, and Image Search. Let’s understand in detail how object detection is done at Pinterest.



In step 1, the input image is fed into a pre-trained CNN, which identifies regions that might contain objects and converts the image into a feature map. In step 2, a region proposal network extracts sub-mappings of various sizes and aspect ratios from that feature map. These sub-mappings are fed into a binary softmax classifier, which predicts whether a given sub-region contains an object. If a promising object is found, it is indexed into a list of possible objects along with its region-bounded sub-mapping, which an object detection network then classifies into object categories.
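The two-step pipeline can be caricatured in plain Python. A real system scores proposals with a CNN and a binary softmax; this toy stand-in uses the average activation of a sliding window over a hand-written feature map as the "objectness" score, just to make the propose-then-filter structure concrete.

```python
def propose_regions(feature_map, sizes=((2, 2), (2, 3))):
    """Slide windows of several sizes/aspect ratios over the map (toy RPN)."""
    h, w = len(feature_map), len(feature_map[0])
    for rh, rw in sizes:
        for y in range(h - rh + 1):
            for x in range(w - rw + 1):
                yield (y, x, rh, rw)

def objectness(feature_map, region):
    """Mean activation inside the window: a stand-in for the softmax score."""
    y, x, rh, rw = region
    vals = [feature_map[i][j] for i in range(y, y + rh) for j in range(x, x + rw)]
    return sum(vals) / len(vals)

def detect(feature_map, threshold=0.5):
    """Index the regions whose objectness clears the threshold (step 2)."""
    return [r for r in propose_regions(feature_map)
            if objectness(feature_map, r) >= threshold]

fmap = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.9, 0.9, 0.1],
]
print(detect(fmap))  # regions covering the bright 2x2 block clear the threshold
```

In the real pipeline, each surviving region’s feature sub-mapping is then handed to an object detection network for category classification.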


This is how Lens has been able to use Computer Vision and Pinterest’s bipartite graph structure to generate highly relevant and diverse results for visual search.


Eye-tracking is a technology that makes it possible for a computer system to know where a person is looking. An eye-tracking system can detect and quantify the presence, attention, and focus of the user. Eye-tracking systems were originally developed for gaming analysis, to quantify the performance of top gamers, but since then they have found utility in many devices, including consumer and business computers.


Tobii is the world leader in eye-tracking technology, and the applications it supports have expanded from gaming to gesture control and VR. Tobii acquires data via a custom-designed sensor fitted to the device where eye-tracking is needed. The system consists of projectors, customised image sensors, and custom pre-processing with embedded algorithms. The projectors create an infrared light map on the eyeballs, and the camera takes high-resolution images of the eyes to capture their movement pattern. Computer vision algorithms then map the eye movement from these images onto a point on the screen, producing the final gaze point. The resulting stream of temporal data is used to determine the attention and focus of the subject.
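The final mapping step can be sketched with a toy calibration: fit a per-axis linear map from pupil coordinates (as seen by the camera) to screen coordinates, using two calibration fixations. A real tracker fits a far richer model of eye geometry; the coordinates below are invented.

```python
def calibrate(p0, s0, p1, s1):
    """Build a per-axis linear map from pupil coords to screen coords,
    fitted from two calibration fixations (pupil pN observed while the
    user looked at screen point sN)."""
    def axis(pa, sa, pb, sb):
        scale = (sb - sa) / (pb - pa)
        return lambda p: sa + (p - pa) * scale
    fx = axis(p0[0], s0[0], p1[0], s1[0])
    fy = axis(p0[1], s0[1], p1[1], s1[1])
    return lambda pupil: (fx(pupil[0]), fy(pupil[1]))

# Calibration: pupil (0.2, 0.3) meant top-left of a 1920x1080 screen,
# pupil (0.8, 0.7) meant bottom-right.
gaze = calibrate((0.2, 0.3), (0, 0), (0.8, 0.7), (1920, 1080))
print(gaze((0.5, 0.5)))  # ≈ (960.0, 540.0), the centre of the screen
```

Streaming such gaze points over time is what yields the temporal data used to estimate attention and focus.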


Workplace Safety

The human and economic cost of workplace injuries around the world is a staggering $250 billion per year. With AI-enabled intelligent hazard detection systems, workplaces prone to a high rate of onsite injuries are seeing a decrease in both the number and severity of injuries. Imagine a resource that works 24/7 without fatigue and keeps a watchful eye on whether safety regulations are being followed in the workplace!


Intenseye is an AI-powered employee health and safety software platform that helps the world’s largest enterprises scale employee health and safety across their facility footprints. With real-time, 24/7 monitoring of safety procedures, it can detect unsafe practices in the workplace, flag them, and generate employee-level safety scores along with live notifications of safety-norm violations. Intenseye also assists in operationalising response procedures, helping employers stay compliant with safety norms. Alongside standard compliance features, it offers Covid-19 compliance tools that track whether Covid-appropriate norms, such as masking and social distancing, are being followed in the workplace.


The product is implemented on two levels, with computer vision as the basic driver. A big challenge for the organisation was generating real-time predictions from live video streams: inference is inherently slow, so the data pipeline requires parallelisation and GPU computation. The vision systems employed range from anomaly detection to object and activity detection. Finally, the predictions are aggregated to rapidly produce analyses, scores, and alerts in the suite available to the EHS professionals in the workplace, who can then ensure compliance from the workers.
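A minimal sketch of the parallelisation idea: a pool of workers drains a queue of incoming frames so that slow inference calls overlap. The `infer` stub and frame names are placeholders for a real GPU model and real video frames.

```python
import queue
import threading

def infer(frame):
    """Stand-in for a slow GPU model call: flag frames tagged 'no_helmet'."""
    return "violation" if "no_helmet" in frame else "ok"

def worker(frames, results):
    """Drain the frame queue until a poison pill (None) arrives."""
    while True:
        frame = frames.get()
        if frame is None:
            return
        results.put((frame, infer(frame)))

frames, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(frames, results))
           for _ in range(4)]
for t in threads:
    t.start()

for frame in ["cam1_f001", "cam1_f002_no_helmet", "cam2_f001"]:
    frames.put(frame)
for _ in threads:          # one poison pill per worker
    frames.put(None)
for t in threads:
    t.join()

flags = {}
while not results.empty():
    frame, verdict = results.get()
    flags[frame] = verdict
print(flags["cam1_f002_no_helmet"])  # → violation
```

In a production pipeline the same shape appears with processes or GPU batches instead of threads, and the aggregated verdicts feed the scores and alerts shown to EHS professionals.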

Intenseye has developed general-purpose suites for workplaces, like PPE Detection, Area Controls, Vehicle Controls, Housekeeping, Behavioural Safety, and Pandemic Control measures. With their AI-based inspection system, Intenseye has been able to add a lot of value to the businesses they support. Along with saved lives, there have been decreased costs in damages, a boost in productivity, improved morale, and gain in goodwill for their clients.

Retail Stores

In-aisle innovation is shifting how we perceive the future of retail, opening up possibilities for shaping customer experiences. Computer vision is poised to tackle many retail-store pain points and can potentially transform both customer and employee experiences.

Amazon opened the doors of its first Amazon Go store in Seattle in January 2018, after a year of testing its Just Walk Out technology on employees at its headquarters. The concept creates an active shopping session, links shopping activity to the Amazon app, and gives the customer a truly hassle-free experience. It eliminates the need to queue with other buyers at checkout points, creating a unique value proposition in the current pandemic-stricken world.


How does Amazon do it? The process which is primarily driven by Computer Vision can be divided into a few parts:


Data Acquisition: Along with an array of application-specific sensors (pressure, weight, RFID), the majority of the data is visual, extracted from several cameras and ranging from images to videos. Other data is also available to the algorithms, such as Amazon user history and customer metadata.

Data Processing: Computer vision algorithms perform a wide array of tasks that capture the customer’s activity and add events to the current shopping session in real time. These tasks include activity detection (e.g., an article picked up by the customer), object detection (e.g., the number of articles present in the cart), and image classification (e.g., whether the customer has a given product in the cart). Along with tracking customer activity, visual data is used to assist staff in other store operations such as inventory management (object detection) and store layout optimisation (customer heat maps).

Charging the customer: As soon as the customer finishes shopping and moves to the store’s transition area, a virtual bill is generated for the items in the virtual cart of the current shopping session. Once the system detects that the customer has left the store, their Amazon account is charged for the purchase.
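The flow above can be sketched as a virtual-cart session driven by vision events. The event names, items, and prices below are invented for illustration; in reality these events come from the detection pipeline described in the data-processing step.

```python
from collections import Counter

class ShoppingSession:
    """Toy virtual cart for one customer's active shopping session."""
    def __init__(self, customer):
        self.customer = customer
        self.cart = Counter()

    def on_event(self, event, item):
        """Handle an event emitted by the vision pipeline for this customer."""
        if event == "pick":
            self.cart[item] += 1
        elif event == "put_back":
            self.cart[item] = max(0, self.cart[item] - 1)

    def on_exit(self, prices):
        """Customer crossed the transition area: bill the virtual cart."""
        return sum(prices[item] * n for item, n in self.cart.items())

s = ShoppingSession("alice")
s.on_event("pick", "milk")
s.on_event("pick", "bread")
s.on_event("put_back", "bread")   # detected returning the bread to the shelf
print(s.on_exit({"milk": 3.50, "bread": 2.00}))  # → 3.5
```

The put-back event is the interesting case: the system must decrement the cart when a customer returns an item, otherwise the exit charge would be wrong.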


Amazon Go is a truly revolutionary use of computer vision and AI. It solves an inherent problem for in-store retail customers while assisting staff, improving overall productivity, and generating useful insights from data in the process. It still needs to make economic sense in the post-pandemic world, however, and privacy concerns must be addressed before this form of retail shopping is adopted more widely.

Industrial Quality Control

The ability of computer vision to distinguish between different characteristics of products makes it a useful tool for object classification and quality evaluation. Vision applications can sort and grade materials by features such as size, shape, colour, and texture, so that losses incurred during harvesting, production, and marketing are minimised.

Human involvement introduces subjectivity, fatigue, delay, and irregularity into the quality control process of a modern manufacturing or sorting line. Machines can sort out unacceptable items around the clock and with far higher consistency than humans; the only requirement is a robust computer vision system. Vision systems are therefore being deployed at scale, whether to grade products for the refurbished market or to catch manufacturing shortcomings.


Let’s solve the problem of detecting cracks in smartphone displays. The system needs a few important things to function as intended.

Data: Quality data is imperative for the success of any machine learning system. Here, we require a camera system that captures the screen at various angles in different lighting conditions.

Labels: We may want to eliminate human shortcomings from the process, but quality labels generated by humans under normal working conditions are crucial to the success of a vision system. For a robust, large-scale process, it is best to employ multiple professional labellers and have them reconcile disagreements in their labels, eliminating subjectivity from the process.

Modelling: Models must be designed and deployed with the required throughput of the sorting line in mind. Simple classification or object detection models are enough for detecting cracks; the main focus should be on prediction time, which differs from line to line.

Inference and Monitoring: Models can initially be deployed in shadow mode, where their performance is evaluated against human workers on live data. If performance is acceptable they can be promoted to production; otherwise another modelling iteration is needed. Data drift should be monitored, manually or automatically, alongside model performance, and models retrained when results degrade.
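The shadow-mode step can be sketched as a simple agreement check between the model and the human inspectors it shadows. The deploy threshold and labels below are illustrative assumptions, not a universal standard.

```python
def shadow_mode_report(model_preds, human_labels, deploy_threshold=0.95):
    """Compare shadow-mode model predictions against human decisions
    on the same live items; recommend deployment only above threshold."""
    agree = sum(m == h for m, h in zip(model_preds, human_labels))
    accuracy = agree / len(human_labels)
    return {"accuracy": accuracy, "deploy": accuracy >= deploy_threshold}

report = shadow_mode_report(
    ["ok", "crack", "ok", "crack"],   # model verdicts on four screens
    ["ok", "crack", "ok", "ok"],      # human inspector's verdicts
)
print(report)  # accuracy 0.75 → keep iterating, don't deploy yet
```

The same comparison, run continuously after deployment, doubles as a drift monitor: a falling agreement rate is the signal to retrain.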

Many manufacturing lines have implemented automated control systems for screen quality, be it televisions, phones, tablets, or other smart devices. Companies like Intel are also providing services to develop such systems which provide great value to many businesses.


Another quality control application using vision systems has been launched by food technology specialist OAL, who have developed April Eye, a vision system for automatic date-code verification. The CV-based system fully automates the date-verification process, removing the need for a human operator. It reduces the risk of product recalls and emergency product withdrawals (EPWs), which are largely caused by human error on packaging lines. The product targets mistakes that cost food manufacturers £60-80 million a year. The system has increased throughput substantially, to 300 correctly verified packs a minute, with high precision. A neural network trained on acceptable and unacceptable product date codes generates predictions on live date codes in real time. The system ensures that no incorrect labels can be released into the supply chain, protecting consumers, margins, and brands.

Computer Vision adds value to quality control processes in a number of ways. It adds automation to the process, thus making it productive, efficient, precise, and cost-effective.

With a deeper understanding of how scalable vision-based systems are implemented at leading product-based organisations, you are now perfectly equipped to disrupt yet another industry using computer vision. May the accuracy be with you.



About me

Kshitij Gangwar

For immediate exchange of thoughts, please write to me at [email protected]

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


How To Install A WordPress Test Site On Your Computer

One of the best ways to test a new website you’re developing is by installing a WordPress test site on your computer. Test it locally, make sure everything looks and works well, and then upload it to the live site all at once.

When it comes to WordPress, there are several things to consider when running locally. You’ll need a working WordPress installation, an available SQL database, and a local web server for everything to run on.


You can set up all three on your local computer without too much effort, using the process outlined below.

Install a Local Web Server

The first thing you’ll need for a local WordPress test site is a web server running on your computer. Running a web server involves making sure the right ports are open, the PHP and Perl language libraries are installed, and the web server software can properly serve pages to your browser.

Similar to setting up an FTP server or a local Minecraft server, there are Windows applications available to run a local web server as well. One of the most popular of those is XAMPP.

To get started, just download and install the XAMPP software to your desktop or laptop PC.

1. Run the installer, make sure all components are enabled, and select Next to continue.

2. Choose a location for your web server. The best option is to choose the default folder at the root of the C: drive where permissions will be set properly. Select Next to continue.

3. Select your languages and select Next. Keep Bitnami enabled, which will help you with installing WordPress after installation. Select Next. Finally, select Next one more time to install XAMPP.

Installation will take about five minutes. Once finished, the XAMPP control panel will open. Close it for now.

Install WordPress on Your XAMPP Web Server

Reopen the XAMPP control panel, then select Start to the right of Apache and MySQL to launch the web server and the SQL database needed for your WordPress test site to work properly. 

You can see the web server’s file structure by looking at the location where you installed XAMPP. In this example, XAMPP is installed in C:\XAMPP. This is where all of the web files that are viewable from your web browser will go.

XAMPP comes with Bitnami, which lets you quickly install WordPress on top of your current XAMPP web server. 

1. Open a web browser and type localhost in the URL field. Press Enter. When the XAMPP dashboard comes up, scroll down to the bottom of the page where you’ll see the Bitnami section.

2. Select the WordPress icon at the bottom of the page. On the Bitnami site, scroll down to the WordPress section and select the Windows link to download the WordPress module installer.

3. Run the downloaded Bitnami WordPress module installer.

4. In the next step, configure the Admin login, name, email address, and password that you want to use with your WordPress test site.

5. Select Next when you’re done, type a name for the WordPress test site, and select Next. On the next page, you can configure email support so your test site can send notifications to your email. This is optional.

6. You can deselect Launch wordpress in the cloud with Bitnami since this will just be a local WordPress test site on your computer. Select Next to continue. Select Next again to initiate the installation. Once the installation is done, select Finish to launch the Bitnami WordPress module.

This will launch your default web browser with your new local WordPress test site loaded. The link will include your localhost IP address (your computer’s IP address), with /wordpress/ at the end, where your site is stored.

The path to these WordPress files is C:\XAMPP\apps\wordpress\htdocs

Now you’re ready to configure your WordPress test site and start using it.

Using Your WordPress Test Site

There are a few things you can do with this new local WordPress test site. 

Import a Copy of Your Live Site

You could export your actual online website and load it into this installation for testing.

To do this, you’ll need to back up your WordPress site and WordPress database. This will give you a zipped folder with all of the WordPress files, as well as a *.gz file, which is the backup of your MySQL database.

You can copy the backed-up WordPress files directly into your local WordPress folders. You can also import your MySQL *.gz database file into your local MySQL database using phpMyAdmin.

1. Open phpMyAdmin by browsing to localhost/phpmyadmin and select your WordPress database.

2. Select the Import tab, and select the Choose File button under File to import.

3. Browse to your backed-up *.gz database file, and phpMyAdmin will import all posts and WordPress settings into your test WordPress site.

Once you’re done and you reopen the local WordPress installation using the same link as above, you’ll see your original online site now running on your local computer.

Other Things You Can Do With a WordPress Test Site

In addition to running your live site on your local machine, there are a lot of other useful things you can do with your local WordPress test site.

Install and test any WordPress theme

Test making code changes to your WordPress site

Install and test WordPress plugin configurations

Play around with WordPress configurations to see how it changes your site
