How The Public Cloud Drives Emerging Technologies


Cloud computing is, to be sure, today's leading driver of emerging technologies. The next big battles among the public cloud vendors will center on emerging technologies like artificial intelligence, the Internet of Things, virtual reality and blockchain.


By now, most enterprise leaders understand the value of cloud computing. They have experimented with software as a service (SaaS), infrastructure as a service (IaaS) and platform as a service (PaaS), and now they are looking to expand their use of the cloud. Synergy Research recently found that the public cloud market is growing 40 percent per year. And IDC has predicted that by 2023, 67 percent of IT infrastructure and software will be based in the cloud.

But as organizations expand their use of the public cloud, they are asking for more than just the traditional SaaS, IaaS and PaaS offerings. They are looking to cloud vendors to help them keep up with the ever-increasing pace of technological change. As Gartner’s Daryl Plummer explained in a blog post, “Technology-based innovation is arriving faster than most organizations can keep up with. Before one innovation is implemented, two others arrive.”

And cloud vendors are rushing to fill that need with new services related to artificial intelligence (AI), the Internet of Things (IoT), virtual reality (VR) and blockchain.

In many ways, cloud computing and AI seem like a perfect match: the cloud clearly enables AI. Training and running AI models demands high-end computing resources, often needed only for relatively short periods of time.

These requirements line up almost perfectly with the benefits offered by the cloud. Public cloud services allow organizations to access these high-end computing resources but pay only for the amount of time that they need them.

Sensing an opportunity, all of the major cloud vendors are investing heavily in AI and machine learning research. They have all rolled out AI services related to analytics, natural language processing and image recognition, and they continue to expand their capabilities. In the coming year, look for this cloud AI trend to accelerate.

In a 2023 report, Canalys Research said that in the past, public cloud growth “was driven by demand for primary cloud infrastructure services, such as on-demand computing and storage, across all customer segments and industries. But future growth will be fueled by customers using the artificial intelligence (AI) platforms cloud service providers are building to develop new applications, processes, services and user experiences.”

Many of those cloud-based AI applications will be analyzing data from another area of emerging technology — IoT. According to IDC, "By 2023 ... 100 percent of IoT initiatives will be supported by AI capabilities," and "by 2023, IoT technology will be in 95 percent of electronics for new product designs."

As organizations deploy fleets of IoT sensors and devices, they need analytics tools to help them make sense of that data.

They will likely transmit at least some of that data to the cloud for analysis. And as with AI, the leading public cloud vendors have launched IoT-related services to help with that process.

However, a competing and complementary technology — edge computing — will likely handle some of the burden. In fact, Gartner's Tom Bittman has said, "The edge will eat the cloud."

Edge computing involves doing data processing and analytics either on or very close to the devices where the data is generated. As edge computing evolves, devices at the edge of the network will become smarter and faster, and they will handle at least some of the computing burden that might otherwise occur in the cloud. According to Bittman, "The cloud will have its role, but the edge is coming, and it's going to be big."

That tension between cloud computing and edge computing also exists in virtual reality (VR) and the related fields of augmented reality (AR) and mixed reality (MR). According to IDC, 30 percent of consumer-facing G2000 companies are already experimenting with these technologies this year, and next year AR mobile apps will likely have more than 400 million users.

Like AI, the new VR, AR and MR technologies require massive computing power, particularly graphics processing capabilities. But unlike AI applications, these immersive reality applications need lots of processing power over an extended period of time, and latency caused by waiting for data to be sent from the cloud to the edge device can make the experience less enjoyable.

Most of the leading public cloud vendors don't have services specifically targeted at the VR market today. However, their existing cloud development and infrastructure services can easily serve as a platform for VR/AR/MR applications. For example, Oracle Cloud has demonstrated an industrial VR use case that uses its cloud computing and IoT services.

As VR and AR applications become more popular, this will likely become another area of intense competition for public cloud vendors.

Consumers might not be as familiar with blockchain as with the other emerging technologies like AI, IoT and VR, but the enterprise is taking notice. According to Gartner, “By the end of 2023, the banking industry will derive $1 billion in business value from the use of blockchain-based cryptocurrencies.”

Blockchain is the secure distributed ledger technology that enables cryptocurrencies like Bitcoin, but it has many uses beyond digital currency. For example, it could manage digital copyrights, enable financial transactions without an intermediary like a bank, track supply chain information, record health histories or perhaps even slow the spread of “fake news.”
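To make the ledger idea concrete, here is a minimal Python sketch (illustrative only, not any vendor's implementation) showing how each block commits to the hash of the previous one, so altering an earlier record breaks every later link; that property is what makes the ledger tamper-evident. The supply-chain events are invented example data.

```python
# Minimal sketch of the core idea behind a blockchain ledger: each block
# stores a hash of the previous block, so tampering with any record
# invalidates every block that follows. Illustrative only.
import hashlib
import json
from datetime import datetime, timezone

def make_block(data, previous_hash):
    """Create a block whose identity depends on its data and its predecessor."""
    block = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": data,
        "previous_hash": previous_hash,
    }
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# Build a tiny chain of supply-chain events (hypothetical example data).
genesis = make_block({"event": "goods manufactured"}, previous_hash="0" * 64)
shipped = make_block({"event": "goods shipped"}, previous_hash=genesis["hash"])
received = make_block({"event": "goods received"}, previous_hash=shipped["hash"])

# Verifying the chain: each block must reference the hash of the one before it.
assert shipped["previous_hash"] == genesis["hash"]
assert received["previous_hash"] == shipped["hash"]
print(received["hash"][:16], "...chain verified")
```

In a real deployment it is consensus among many independent parties, not a single script, that keeps such a shared ledger authoritative.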

Most of these uses are still a long way off, but the cloud vendors are beginning to offer blockchain services. Microsoft and IBM both have cloud blockchain services that enterprises can begin experimenting with today. Amazon doesn’t have a specific blockchain service of its own right now, but it is working with partners who offer blockchain services through its Marketplace. And Google is also reportedly experimenting with blockchain, although it also does not yet offer a public blockchain service.

This emerging technology probably won’t hit the mainstream as soon as the other emerging technologies discussed here. But it’s clear that this too will be a key battleground for public cloud vendors.

Cloud computing has not only cemented its position as an important source of infrastructure and software services for enterprises, but it also looks likely to expand its role in enterprise IT as organizations investigate emerging technologies.

IDC’s most recent data on the public cloud shows that the market is growing even faster than analysts had anticipated. In the first half of 2023 alone, cloud computing revenues climbed 28.6 percent to hit $63.2 billion. Frank Gens, senior vice president and chief analyst at IDC, stated, “Public cloud adoption is accelerating in large part as enterprises recognize that the cloud has become the launchpad for virtually every new IT innovation in the last 24 months – including AI, blockchain, quantum computing and more. Organizations not on the public cloud will be increasingly isolated from the world of tech innovation.”

Because they offer enterprises a way to experiment with new technology without making a large initial investment, public cloud vendors could help to increase the rate at which organizations adopt new technologies. And that, in turn, could spur more investment in cloud computing as enterprises seek to keep up with the competition.


How Emerging Economies And Different Industries Benefit From Cloud Computing

Cloud computing can increase creativity and enhance service provision in the public and private sectors of developing countries by giving organizations on-demand access to data and processing resources that can be scaled up to improve efficiency. Because it can be used by anybody with an internet connection, anywhere in the world, governments in developing countries, which often have limited resources, can benefit greatly from cloud computing.

Automotive

With the automotive cloud, car companies can store their inventory and other data in one place that is easy to access. The automotive industry depends on data being available at all times: even if the perfect car isn't in stock at one location, clients will be pleased when staff can search inventory and direct them to a store that has it.

Scalability and Flexibility

Using the cloud gives your business more freedom. You can quickly add more resources and storage to meet business needs without having to buy new hardware. Companies don’t have to pay for or build the infrastructure needed to handle their highest load levels.

Increasing Security

Cloud technology has made it possible for third-party providers to build security measures that everyone can use, solving many of the problems that often come up when digital solutions are being built.

Development of New Goods and Services

Some developing countries may be better able to adapt to digital economies than developed countries because they don't have to deal with legacy systems; they are free to try new things and come up with new business models and processes.

Extending Reach

Cloud services help organizations in developing countries reach users in remote locations more easily. And because of the number of players in the cloud market and the range of services they offer, companies in developing countries can now hire workers from all over the world. This has led to higher productivity and a spread of knowledge that helps everyone in the community improve at what they do, which in turn speeds up development.

Three Industries That Cloud Computing Can Help the Most

The Medical Field

The medical field can use cloud computing services to get everything it needs to make clinicians' jobs easier. Data can be stored in one place, and experts can access it remotely to review cases and decide on the best course of action. Problems that have plagued the industry for decades include the high risk of human error, inconsistency in how medicine is practiced, and the difficulty of keeping patient information private, and practitioners have long been looking for better ways to organize their work. Cloud computing could be the answer that doctors and medical students have been waiting for. It also means doctors and nurses no longer need to make multiple copies of sensitive documents, so there is less risk of losing important information.

Education

Cloud-based online learning is a very important part of improving higher education in developing economies, and cloud computing is already used widely in education. One big benefit is that learning materials are easy to access, even in the most remote parts of the world. Education programs in places such as Shanghai and South Africa are already benefiting from this use of the AWS Cloud.

Manufacturing

Manufacturers can use cloud computing to improve their production capabilities more easily than ever, and cloud services have a great deal to offer businesses that make things. With them, manufacturers can ensure production is running at its best and that time and money are not wasted on unnecessary tasks. Tools such as GPS tracking give manufacturers real-time information about how their factories are operating and let them make changes and adjustments immediately, rather than waiting until the end of the month or even the year to alter something on the production line. High-performance computing (HPC) services also often make it possible to create a digital copy of a product so it can be tested virtually.

Conclusion

Significant opportunities exist for developing countries to enhance public services and stimulate sustainable socio-economic development through cloud computing. However, companies and governments in developing countries must be cognizant of the shortcomings in their nascent institutions and physical infrastructure, and they need to understand the risks of using the cloud in order to make informed decisions about its benefits. To ensure a smooth and secure transition to cloud computing, national governments in developing nations must create favorable conditions.

The Evolution Of Solid State Drives (SSDs)

When solid state storage was invented over half a century ago and then made widely commercially available, its effect was transformative: the technology has played a major role in the evolution of storage, gaming, business and computing. Examining how SSDs have evolved also helps you understand what the future holds for their components, benefits and applications.

What is SSD storage?

Solid state drive (SSD) storage uses non-volatile solid state chips containing flash memory cells to store data on a long-term basis. Unlike traditional hard disk drives (HDDs), which use magnetic platters spinning at high speed and an actuator arm reminiscent of a record player's, SSDs have no moving parts. Instead, they rely entirely on flash memory to store data, which makes them much faster at reading and writing, both in ad hoc and in sustained operations.

Using a mesh of electrical cells in NAND, a type of non-volatile flash memory, to store data, SSDs include an embedded processor known as the controller. It runs firmware-level code to help the drive operate and to bridge the media to the host computer via the interface bus. Because the memory is non-volatile, today's SSDs don't require a constant power source to preserve their data, which makes them more reliable than traditional HDDs from both a mechanical and a data-integrity standpoint.

SSDs also have built-in technology that further improves read/write speeds. Historically, HDDs included a small amount of memory within the drive hardware itself (typically 8 or 16 MB) to increase perceived read/write performance. If the data a user wants to read or write can be held in this high-performing cache, the drive temporarily stores it in the fast memory modules, reports back to the operating system once that is complete, and then transfers the data from the cache to the much slower magnetic media. This doesn't always help, as only a small portion of the drive's total data is cached at any time, and if data isn't in the cache, it has to be read from the slower physical medium.

SSDs use the same caching concept, except they include dynamic random access memory (DRAM) chips, a type of semiconductor memory commonly used in PCs and servers, within the controller hardware on the SSD itself. Ranging from 64 MB up to several GB, these caches buffer requests to extend the life of the drive and serve short bursts of read/write requests faster than the regular flash memory allows. They are essential in enterprise storage applications, including heavily used file servers and database servers.
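As a rough illustration of the caching behavior described above, the following Python sketch models a tiny read cache sitting in front of a slow backing store; the capacity, block addresses and eviction policy are simplified stand-ins, not how any particular drive controller actually works.

```python
# Minimal sketch of the caching idea described above: a small, fast buffer
# (standing in for the SSD's DRAM cache) answers repeated requests without
# touching the slower backing medium. Sizes and names are illustrative only.
from collections import OrderedDict

class TinyReadCache:
    def __init__(self, capacity_blocks=4):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()           # block address -> data
        self.hits = 0
        self.misses = 0

    def read(self, address, backing_store):
        if address in self.cache:
            self.hits += 1
            self.cache.move_to_end(address)  # keep recently used blocks warm
            return self.cache[address]
        self.misses += 1
        data = backing_store[address]        # "slow" read from NAND/platter
        self.cache[address] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used block
        return data

backing = {addr: f"block-{addr}" for addr in range(100)}
cache = TinyReadCache()
for addr in [1, 2, 1, 1, 3, 2]:              # repeated reads hit the cache
    cache.read(addr, backing)
print(cache.hits, "hits,", cache.misses, "misses")
```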

When were SSDs first available?

Solid state memory has been used for longer-term storage since the 1950s, but those solutions were generally found in mainframes or larger minicomputers. They also required battery backups to preserve the contents of the memory when the machine was not powered, as those early solutions used volatile memory.


Since then, the technology has gotten smaller and faster, and it no longer requires battery backup. Performance has skyrocketed too, as new PC bus interfaces allow data transfer rates far beyond what traditional spinning media can sustain. SSDs are also far less expensive today, especially compared to the first commercial SSD released in 1991, a 20 MB drive that sold for about $1,000.

Applications for SSDs

There are multiple benefits to using SSDs for production storage applications. Because SSDs have no moving mechanical components, they use less power, are more resistant to drops or rough handling, operate almost silently, and read quickly with less latency. Additionally, since there are no spinning platters or actuator arms, there is no need to wait for the physical parts to ramp up to operating speed, which eliminates a performance hit that hard drives cannot escape. SSDs are also lightweight, which makes them ideal for laptops, small form factor machines and high-capacity storage area networks in a smaller footprint. Common applications include:

To host both the database engine and the database itself for quick access.

As a “hot” tier in a stratified network storage archive, where frequently accessed data can be retrieved and rewritten very quickly.

In situations where physical shocks are a possibility and HDDs would present an untenable risk to system reliability.

In gaming, where the user is often moving through new environments.

In business settings where you need your operating system and applications to load quickly.

How to choose the right SSD for your needs

PCIe SSDs interface with a system via its PCIe slot, the same type of slot used for high-speed video cards and other expansion cards. PCIe 1.0 launched in 2003 with a transfer rate of 2.5 gigatransfers per second (GT/s) and a total bandwidth of 8 Gbps. GT/s measures the number of transfers per second the bus can perform; for PCIe, each transfer carries one bit per lane.

Several years later, PCIe 2.0 was introduced, doubling both the bandwidth and the gigatransfer speed, hitting 16 Gbps and 5 GT/s, respectively. Subsequent generations doubled bandwidth and gigatransfer speeds with each new iteration. PCIe 3.0, for instance, features 32 Gbps bandwidth and 8 GT/s.

Most recently, SSDs started using the PCIe 4.0 specification, which features bandwidth of 64 Gbps and a 16 GT/s rate. PCIe is now being paired with the non-volatile memory host controller interface specification (NVMe), a communications protocol for high-speed storage systems that runs on top of PCIe.
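The bandwidth figures quoted above line up with a four-lane (x4) link, the configuration most NVMe SSDs use; here is a quick back-of-the-envelope Python check of that arithmetic. The x4 lane count and the line-encoding efficiencies are assumptions added here, not stated in the text.

```python
# Back-of-the-envelope check of the figures quoted above, assuming they refer
# to a four-lane (x4) link, the configuration most NVMe SSDs use. PCIe 1.0/2.0
# use 8b/10b line encoding (80% efficient); PCIe 3.0+ use 128b/130b (~98.5%).
GENERATIONS = {
    "PCIe 1.0": (2.5, 8 / 10),
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),
    "PCIe 4.0": (16.0, 128 / 130),
}

LANES = 4
for gen, (gt_per_s, efficiency) in GENERATIONS.items():
    # GT/s x encoding efficiency = usable Gbps per lane, per direction.
    usable_gbps = gt_per_s * efficiency * LANES
    print(f"{gen}: {gt_per_s} GT/s -> ~{usable_gbps:.0f} Gbps on a x4 link")
```

The small shortfalls from the round numbers (for example, roughly 63 rather than 64 Gbps for PCIe 4.0) come from the encoding overhead.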

However, not every system has a PCIe slot to spare; in many machines the available slots are occupied by other add-ons, such as graphics cards. In these cases, SATA SSDs like the Samsung 870 EVO are an ideal option for content creators, IT professionals and everyday users. The 870 EVO uses the standard SATA interface and reaches that interface's practical limit of 560/530 MB/s sequential read/write speeds. The Samsung 870 QVO also reaches the SATA limit and is offered in 1, 2, 4 and 8 TB 2.5-inch SATA form factor configurations.

What does the future hold?

In the short term, capacities will continue to ramp up while the cost per GB of SSDs continues to decrease. New form factors that increase the number of parallel data transmission lanes between the storage and the host bus will emerge, further increasing the speed at which the NAND storage medium can be accessed.

The physical layer of cells that holds the blocks and pages will improve, offering better reliability and performance. Form factors will also continue to shrink; Samsung has announced that it reduced cell volume by up to 35%, making its 176-layer 7th-generation V-NAND similar in height to the previous generation.


Innovation Unleashed: The Hottest NLP Technologies Of 2023

Improving Text Representation

Accurate representation of text is necessary as it allows the machine to understand the meaning and intent of the text and allows us to perform various tasks such as text classification, language translation, and text generation.

To feed textual data into NLP models, we first need to convert the text into embeddings, and the quality of a model's results depends heavily on those embeddings.

Data2Vec 2.0

Data2vec 2.0 is an updated release of the data2vec model. Data2vec is a self-supervised learning algorithm, meaning it can learn from vision, text, and speech without needing explicit labels; self-supervised algorithms learn by using the inherent structure of the data itself.

Data2vec 2.0 has shown strong results on tasks like text understanding, image segmentation, and speech translation.

Similar to the original data2vec algorithm, data2vec 2.0 predicts contextualized representations of the data, meaning the representations take the entire training example into account.

Data2vec 2.0 improves on all of its predecessors: it is significantly faster than comparable models without compromising accuracy.

For speech, the test was done on the LibriSpeech speech recognition benchmark, where it performed more than 11 times faster than wav2vec 2.0 with similar accuracy. For natural language processing (NLP), evaluation was done on the General Language Understanding Evaluation (GLUE) benchmark, where it achieved the same accuracy as RoBERTa and BERT.

The architecture of Data2Vec 2.0


To know more about the topic, refer to this link

New and Improved Embedding Model

Text-embedding-ada-002 was recently launched by OpenAI and outperforms all of the company's previous embedding models.

Text-embedding-ada-002 is trained using a supervised learning approach, which means that it is trained on a labeled dataset that consists of text input and corresponding targets.

The model uses a transformer-based architecture designed to process sequential data such as text. The transformer architecture allows the model to effectively capture the relationships between words and phrases in the text and generate embeddings that accurately reflect the meaning of the input.

The new model, text-embedding-ada-002, replaces five separate models for text search, text similarity, and code search, and it is priced significantly lower than the previous models.

The context length of the new model is increased, which makes it more convenient to work with large documents, while the embedding size of the new model is decreased, making it more cost-effective.
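As a rough illustration, here is a minimal sketch of requesting embeddings from text-embedding-ada-002 with the openai Python package (pre-1.0 interface). It assumes an OPENAI_API_KEY environment variable is set, and the input sentence is arbitrary.

```python
# Minimal sketch of obtaining embeddings from text-embedding-ada-002 with the
# openai Python package (pre-1.0 interface). Assumes OPENAI_API_KEY is set in
# the environment; the input text is an arbitrary example.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=["Cloud computing drives emerging technologies."],
)
vector = response["data"][0]["embedding"]   # one 1536-dimensional vector
print(len(vector))
```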

Image and Video Generation

Imagen

Imagen, developed by Google and launched in 2022, is a text-to-image diffusion model. It takes in a description of an image and produces realistic images.

Diffusion models are generative models that produce high-resolution images. They work in two steps: in the first step, random Gaussian noise is gradually added to an image, and in the second step, the model learns to reverse the process by removing the noise, and that learned denoising process is then used to generate new data.

Imagen first encodes the text into embeddings and then uses a diffusion model to generate an image; a series of additional diffusion models upscales the result to high resolution.
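To make the two-step idea concrete, here is an illustrative NumPy sketch of the forward (noising) step; the noise schedule and the stand-in image are arbitrary, and the learned reverse (denoising) model that Imagen actually trains is not shown.

```python
# Illustrative sketch of the forward (noising) step of a diffusion model:
# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise.
# A trained diffusion model learns to reverse this process; the schedule and
# "image" here are arbitrary stand-ins, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(64, 64, 3))   # stand-in for a clean image x_0

betas = np.linspace(1e-4, 0.02, 1000)              # a simple linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)

def noise_image(x0, t):
    """Add Gaussian noise to x0 according to timestep t of the schedule."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

slightly_noisy = noise_image(image, t=10)
mostly_noise = noise_image(image, t=900)
print(slightly_noisy.std(), mostly_noise.std())    # noise dominates at large t
```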

It is a really interesting technology as you can visualize your creative thinking just by describing an image and generating whatever you want in moments.

Now let me show you the output image I got for the following text prompt.

Text: A marble statue of a Koala DJ in front of a marble statue of a turntable. The Koala wears large marble headphones.

Output Image:

Output Image of a Koala DJ by Imagen


I know, that was really fascinating, right? To learn more about the model, refer to this link.

DreamFusion

DreamFusion, developed by Google in 2022, can generate 3D objects based on text input.

The 3D objects created are of high quality and are exportable. They can be further processed in common 3D tools.

Video of some 3D images produced by DreamFusion


The 3D model is created from 2D images produced by the generative image model Imagen, so no 3D training data is needed for the model.

Interesting, right? Refer to this link to learn more about the model.

DALL-E2

DALL-E 2 is an AI system developed by OpenAI and launched in 2022 that can create realistic images and art based on textual descriptions.

We have already seen similar technologies, but this system is well worth exploring and spending some time with. I found DALL-E 2 to be one of the best image generation models available.

It uses a version of GPT-3 modified to generate images and is trained on millions of images from across the internet.

DALL-E uses NLP techniques to understand the meaning of the input text and computer vision techniques to generate the image. It is trained on a large dataset of images and their associated textual descriptions, which allows it to learn the relationships between words and visual features. By learning these relationships, DALL-E can generate images that are coherent with the input text.

Let me show you how DALL-E2 works

Input text – Teddy bears

Output Image-

Image of Teddy bears produced by DALL-E2


Here is the link to the research paper if you are interested in reading about it in detail.

Conversational Agents

Here are some of the top conversational models launched recently.

LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything

LaMDA (Language Model for Dialogue Applications), developed by Google, is a language model designed for dialog and answering tasks.

This model can be used in various ways, such as in chatbots, customer service, and virtual assistants.

One of the key features of LaMDA is its ability to generate coherent responses grounded in the input text. This is achieved through the use of a transformer-based language model that is trained on a large dataset of human conversations. The model is able to understand the context of the conversation and generate appropriate responses based on the content of the input text.

LaMDA can generate high-quality responses on a wide variety of topics and open-ended questions.

The developers have also paid attention to the safety of the responses generated by the model, and it is designed to avoid generating offensive or biased content.

I’m sure you guys would want to see a demo of this amazing bot. So here it is!

Conversation with LaMDA


For in-depth knowledge, refer to the link here

ChatGPT

ChatGPT, developed by OpenAI, was released in late November 2022 and is one of the most trending and viral AI products of recent years. Almost every data professional is trying out and researching this chatbot.

ChatGPT is based on the GPT-3 (Generative Pre-trained Transformer 3) language model, a large, transformer-based language model trained on a massive dataset of human-generated text.

ChatGPT generates coherent responses, understands the context of the conversation, and tailors its replies to the content of the input text.

It is designed to carry on conversations with people, including answering follow-up questions across a wide range of topics.

The accuracy and quality of the responses generated by the model are unmatched by any other chatbot.
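For readers who want to experiment programmatically, here is a hedged sketch using OpenAI's chat completions API with the pre-1.0 openai package. The article describes the ChatGPT product itself, so the model name and API used here are assumptions on my part, included only to illustrate how multi-turn context is carried along with each request.

```python
# Hedged sketch of interacting with an OpenAI conversational model through the
# chat completions API (pre-1.0 openai package). The model name is an
# assumption; the point is that prior turns travel with every request, which
# is what lets follow-up questions be understood in context.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

history = [
    {"role": "user", "content": "Explain blockchain in one sentence."},
]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
history.append({
    "role": "assistant",
    "content": reply["choices"][0]["message"]["content"],
})

# A follow-up question works because the earlier turns are sent again.
history.append({"role": "user", "content": "Now give a concrete use case."})
follow_up = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
print(follow_up["choices"][0]["message"]["content"])
```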

Here is the demo of how ChatGPT works

Conversation with ChatGPT

Refer to this link to learn more about the model.

Automatic Speech Recognition

Whisper

Whisper, developed by OpenAI, is a speech recognition model that converts speech to text.

It has many uses, such as virtual assistants and voice recognition software, and it supports transcription in multiple languages as well as translation from those languages into English.

Whisper is trained on 680,000 hours of multilingual and multitask data collected from the web. The use of a large and diverse dataset has led to increased accuracy of the model.

Whisper uses an encoder-decoder architecture in which the input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and passed to the encoder; the decoder is trained to predict the corresponding text caption.

Whisper can be trained on large datasets of speech and transcription pairs to improve its accuracy and adapt to different accents, languages, and speaking styles.
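A minimal sketch of using the open-source whisper Python package to transcribe and translate audio follows; the file name and model size are placeholders, and larger models trade speed for accuracy.

```python
# Minimal sketch using the open-source whisper package (pip install
# openai-whisper). "audio.mp3" is a placeholder file; model sizes other than
# "base" trade speed for accuracy.
import whisper

model = whisper.load_model("base")

# Transcribe in the spoken language...
result = model.transcribe("audio.mp3")
print(result["text"])

# ...or translate the speech into English, as described above.
translated = model.transcribe("audio.mp3", task="translate")
print(translated["text"])
```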

The architecture of Whisper


Transfer Learning in NLP

Transfer learning is a go-to approach for building high-performance models. In transfer learning, the model is trained on large, general datasets and then fine-tuned for the task at hand. It has been widely used in natural language processing (NLP) to improve model performance on almost every task. There has been significant recent research on improving transfer learning techniques; we will discuss the top two developments in this area now.

Zero-Shot Text Classification with Self-Training

As a result of recent developments in big pre-trained language models, the importance of zero-shot text categorization has increased.

In particular, zero-shot classifiers built on natural language inference (NLI) datasets have gained popularity due to their promising results and ready availability.
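As a rough illustration of NLI-based zero-shot classification, here is a sketch using the Hugging Face transformers pipeline; the model name and candidate labels are illustrative choices, and the self-training refinement proposed in the paper is not reproduced here.

```python
# Hedged sketch of NLI-based zero-shot classification with the Hugging Face
# transformers pipeline. The model and labels are illustrative; the paper's
# self-training step is not shown.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The public cloud market is growing 40 percent per year.",
    candidate_labels=["cloud computing", "sports", "cooking"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```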

You can read more about this approach in this conference paper.

Improving In-Context Few-Shot Learning via Self-Supervised Training

In-context few-shot learning refers to learning a new task from only a few examples provided directly in the model's input context, without updating the model's weights. One way to improve the performance of in-context few-shot learning is through the use of self-supervised training.

Self-supervised learning involves training a model on a task using only input data and without explicit human-provided labels. The goal is to learn meaningful representations of the input data that can be used for downstream tasks.

In the context of in-context few-shot learning, self-supervised training can be used to pre-train a model on a related task, such as image classification or language translation, to learn useful data representations. This pre-trained model can then be fine-tuned on the few-shot learning task using only a few labeled examples.
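To illustrate the in-context part, here is a small sketch that builds a few-shot prompt from labeled examples; the examples are invented, and the language model that would complete the prompt (pre-trained with self-supervision as described above) is not called here.

```python
# Illustrative sketch of in-context few-shot learning: a handful of labeled
# examples are placed directly in the prompt, and a (hypothetical) language
# model is asked to continue the pattern. No model weights are updated.
few_shot_examples = [
    ("The battery died after an hour.", "negative"),
    ("Setup took two minutes and it just works.", "positive"),
    ("The screen cracked on day one.", "negative"),
]

def build_prompt(examples, query):
    lines = [f"Review: {text}\nSentiment: {label}\n" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

prompt = build_prompt(few_shot_examples, "Fast shipping and great support.")
print(prompt)  # this prompt would be sent to a language model for completion
```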

Read in detail about the approach in this paper.


Chicago Public Libraries Go Wireless

After carefully rolling it out over the course of two months, the City of Chicago officially unveiled its free wireless network in the city’s libraries today. Now any Chicago resident with a valid Public Library card can access all of the city’s digital library resources wirelessly while on-site at any of the 79 branches.

The city chose San Jose, California-based Airespace to handle the deployment of its centrally managed 802.11b WLAN. The network utilizes Airespace 4000 WLAN Switches and Airespace 1200 access points. All 79 sites are centrally managed from the library’s IT headquarters at the Harold Washington Library Center using Airespace Control System (ACS) software. Airespace, which created a similar but smaller network for the Seattle public library, is also working with the city of Chicago on a more extensive city-wide metropolitan area network (MAN), the full details of which have not yet been announced.

The Chicago Public Library’s hotspot network is good news for all patrons, both with and without wireless-enabled devices. Wi-Fi users can now employ their own laptops to tap into the card catalog and all of the library’s research databases without having to wait in line for an available computer terminal, which in turn frees up more space for patrons without their own wireless devices.

“Our users are delighted with it,” says Mary Dempsey, commissioner of the Chicago Public Library System. “Our staff is delighted, too. For those who live in Chicago and have a wireless laptop, it’s a great resource—and this network frees up that many more land-based computers to the public. It doubles our reach in that respect.”

The network, which partially utilized existing Ethernet network infrastructure, cost only $81,000 to deploy and is only expected to cost the city an additional $14,000 per year to maintain. It was created, says Dempsey, in response to user demand and as a result of Chicago mayor Richard Daley’s drive to bring more library resources to his public.

“We have a dynamic mayor who is an enormous supporter of the library. He believes in moving the city forward using technology. The digital divide is critical for us to bridge if we’re going to have a good quality of life and an educated work force,” says Dempsey.

Anticipated yearly maintenance costs have been kept down primarily because of the ACS software, which allows the library’s few IT staff members to manage the entire system from one central location, and to create and provision consistent security policies across the network. Airespace’s software also provides accurate location tracking which enables rapid fault isolation and problem resolution for each site.

“Management is by far the biggest asset Airespace brought to this project,” says Jeff Aaron, senior manager of Product Marketing at Airespace. “We provide the ability to scale a large network and have it centrally managed and have all the tools in place to easily troubleshoot. With a large-scale network, that is very important. Along with that, the ability to handle load through RF management is an asset. Our software can detect changes in load and dynamically adjust itself making it easy to do balancing across access points. It has to run flawlessly with 10 or 150 users on the network at any given site. RF management enabled this to happen automatically.”

In addition to free, public Wi-Fi, the Airespace WLAN system provides a secure channel for the city’s field workers, Chicago police, and public safety personnel to access.

“Within the library, a certain part of the network is dedicated to police and other city employees. It extends out into the parking lot so that they can pull in and get wireless access without having to leave their vehicles,” says Aaron.

In the coming months, expect to see more announcements from the city of Chicago as it deploys the next phases of its overall MAN plan.

Should Police Scanners Be Public?

The past week has seen a torrent of information, the majority inaccurate, gushing from the faucets of Twitter and Facebook and Reddit and cable news and tabloids and blog posts. The story has become not so much what happened as what didn’t happen; as BuzzFeed notes, the most valuable service a respectable publication can perform right now is not to be the first but to act as Virgil, guiding the public through the morass of information they already have.

In the midst of all this, one of the most difficult sources of information to parse has been one of the oldest: the police scanner. Until this morning, feeds from the Boston Police Department, broadcast over the web and through apps, were publicly available to anyone. Broadcastify, which calls itself “the radio communications industry’s largest platform for streaming live audio for public safety, aircraft, rail, and marine related communications,” had tens of thousands of listeners. Many of those listeners relayed the chatter they heard to Twitter or Reddit, if members of the public, or through news outlets, if members of the media.

Police scanners seem like reliable sources of information, a direct line to those who know more than anyone else about what’s going on on the ground. News reporters and organizations are posting direct quotes from scanners without any equivocation. You could almost see them thinking, “this stuff originates from the police themselves! It must be real!” Some of these channels, which are essentially just like any AM/FM station, are available to the public, or at least any member of the public with a computer (or, in the past, a $100 scanner). Those are mostly calls from dispatch, according to a detective from the Radnor, Pennsylvania police department who chatted with me about how scanners work. “You can hear police calls, fire calls, EMS calls, public works calls,” he said. (Radnor is the hometown of Sunil Tripathi, a Brown student who became a prime suspect in the minds of the public for a few hours last night.) Lindsay Blanton, CEO of the company that owns Broadcastify, confirmed that, saying “Our feed provider terms of service restrict the broadcast of any law enforcement communications that are not routine dispatch frequencies and talkgroups.”

The police don't much care that these channels are available to the public. They're not provided as a service to the public or out of any desire for transparency; many police forces simply don't bother encrypting these radio feeds because they're not seen as sensitive. This isn't the only way they communicate; on-duty officers have secure, encrypted lines as well, the detective from Radnor tells me. It's important to note that there's no law requiring police dispatch lines to be public; in fact, many departments, like the Pasadena Police Department, have decided to encrypt all of their frequencies. Pasadena cites concern for victims, whose names and locations are often broadcast over the channel, as the reason for the change.

What you hear on the scanner is what the dispatcher or communications center hears: a call that something is happening that requires investigation, and conversation that comes from addressing that call. That doesn’t make it true, of course, nor does the dispatcher or any police officer make any claim to that effect. When somebody calls the police station and says they see a suspicious person lurking in an alley, what the public hears through the scanner is “possible suspicious person lurking in an alley.” If it turns out to be a chair with a coat on it, that’s no big deal for the police; they investigated and resolved the call. But if a member of the media hears that, and the call happens to take place in a city in which a recent bombing has killed three and injured hundreds, that chair with a coat can turn into a terrorist with one tweet.

Early this morning, the Boston Police Department tweeted this:

In response, Broadcastify shut down its scanner feeds, saying “MA State PD and Boston Police have requested via social media to not post search locations for the Boston bombing suspects – the Boston PD feed is temporarily offline due to this request.” This is an indirect request, and a respectful response from Broadcastify; the scanner feed isn’t “offline,” it’s merely harder to find, to try to tamp down the flow of misinformation. Lindsay Blanton, from Broadcastify, told me via email that “we did not receive any formal request – we’re just making the temporary decision for now in light of the extraordinary events.”

An academic paper from a doctoral student at the Indiana University School of Journalism examines the legality and ethics of tweeting information from police scanners more closely. Here’s the conclusion, with the important part emphasized by me:

Tweeting public safety radio traffic – while probably legal and often beneficial – should be done sparingly and under pre-set guidelines designed to minimize the spread of flawed information and avoid compromising the safety of emergency personnel, the public, and media. If followed, such precautions should lessen the need – if not the likelihood – for an aggrieved party to seek legal recourse for alleged defamation.

Broadcastify is a perfect example of why the most important element of the debate is the need for specific rules. Though Broadcastify did eventually cut off the flow of scanner information to Twitterers, it was only done after several innocents had already suffered the consequences of false accusation.

* * *

It generally doesn’t hurt that scanners are public. The law states that any criminal in possession of a scanner during the commission of a crime has an increased punishment, to stop them from using dispatch information to make their illegal activities easier, and the most sensitive information isn’t exchanged via these channels. But scanners are assumed to be at best a vital part of law enforcement transparency, and at worst harmless, or even funny. They’re for people like these guys to get their “personal safety, neighborhood crime awareness, emergency preparedness, and excitement!” It’s only now, with the unholy combination of a massive crime story and a relentless need for new information, that police scan dispatches are elevated to the status of unimpeachable, insider fact.

So now we’re reduced to the Boston Police Department having to issue a tweet with the hashtag #MediaAlert to tell us what a police scanner is and when to shut up about it. There’s no law that says Broadcastify had to stop broadcasting the feed that led to an innocent kid from Pennsylvania, among many others, becoming national terrorist suspects. We need some sort of guidance to respond to the increased desire and outlets for information.
