
Ultimately, these software tools should help customers too. Website personalization and behavioural targeting tools are an interesting class of software since they can benefit both companies and customers, yet there are privacy concerns about how the collected data is used.

This category of website personalization and behavioural targeting tools is also interesting since everyone still quotes the Amazon personalized recommendation “collaborative filtering” technique (see the paper describing the algorithm). Most are still wowed by the accuracy of the technique (apart from when buying gifts for others…), but it doesn’t seem to give rise to many privacy complaints.

Magiq dynamic website personalization software

When I bumped into Malcolm Duckett at the last Econsultancy Masterclass presenting Magiq, I was interested to know more. I know Malcolm from many meetings at the Emetrics Marketing Optimization summit.

Here’s my interview with Malcolm which presents the benefits and issues of managing this type of software.

Benefits of dynamic personalisation software Q1 Please give some examples of how Magiq is being used and the benefits it offers?

Two of our clients are using Magiq in quite different ways:

Hotel Reservation Service (HRS) are using Magiq to build customer databases from visitors’ interactions with their web site. Magiq records how users interact with the site and logs key information about their behavior. This includes their physical location (town/country), their organization, any search terms they used in reaching the site, other information which helps to characterize their interest (e.g. whether they have registered for a newsletter, requested additional information, etc.), and, in particular, any e-mail addresses they have entered into the site.

Magiq makes this information available in two forms:

1 – as information available to Magiq’s Gnomes, so that HRS can instruct the gnomes to present specific pieces of content to certain types of user. For example, if a customer has indicated that they are an SME or a procurement manager, the Gnomes can ensure that information relevant to them is injected into the pages presented. By using the Magiq historic customer profile, they can ensure that the right information is provided on each visit, even if the user does not re-identify themselves.

2 – as customer lists. Magiq provides lists of people who are being very active (indicating that they may be “ready to buy”) and would benefit from a call from the sales team, and it also provides lists of people who look ready to defect (because they were regular users but have stopped arriving). These lists are downloaded by HRS’s sales and customer support teams to help them target their activities and communications to the right people with the right message. Because Magiq provides all the contact information each visitor supplied to the web site, together with the data that identifies the customer segment they are in, this communication can be very accurate.

FBTO, an insurance company, was one of Magiq’s early beta customers and used Magiq’s data to personalize the home page for different segments of users. They identified 4 stages of engagement with the brand, from users showing an interest in a specific product to users who had purchased one. For each segment and each product group they created a small banner which provided specific information for users in that segment on the product group of interest.

By making this simple change they increased their conversion rates by 15%.

They also used the data to allow their call centre to contact people who looked to be on the point of buying, but had not made their purchase online. Targeting calls in this way produced a campaign that resulted in a sale to one in every three people they called.

Personalisation features Q2 What are the differences from other similar tools or the well-known Amazon personalization?

Today there are behavioral targeting tools that display content on the basis of current behavior, but we are unaware of any solution that builds and maintains long-term customer behavior records for every individual visitor and lets the site owner use that combination of long-term and real-time behavior to personalize their communications and web page content.

Also, all other approaches to personalization or recommendation require the site owner to undertake significant tagging activity to collect the data needed and to add tags to allow content to be added. Magiq, by contrast, is effectively Tag-Free, requiring only the inclusion of a static script file in the pages of the site. This can be done via a one-time change to the content management system, or even via the web servers or load balancers for the site.

This has several implications. Firstly, deployment of the solution is almost instantaneous. Secondly, a static/flat site, or a site or application the customer is unable to change, can very simply be turned into a site with dynamic and rich media content using Magiq. Moreover, the marketing team can change the content, personalization rules and content types instantly via Magiq’s simple Apps and Gnomes.

In actuality, Magiq brings exactly the levels of personalization and creativity that Amazon has implemented, without needing to spend the millions of pounds in custom development and technology that they had to utilize to implement their solution.

In our view, Magiq is really Amazon personalization in a box!

Privacy issues of personalisation and behavioural targeting Q3 How is the privacy of the user protected and how are possible concerns addressed?

Magiq only uses the data that users voluntarily enter into the site and links this to location and organization data derived from their IP address. It’s like the message you hear when calling a call centre that says “this call is being recorded for the purpose of customer service quality improvement”.

[Editor’s note: Here is a grab from the Magiq site showing some of the info collected]

Magiq’s contracts also require the site owner to disclose the fact that they are collecting data and the purposes for which it will be used, as we believe openness with visitors is very important.

Magiq’s solution also goes to considerable lengths to manage the privacy and security of the personal information being collected. For example, all data Magiq collects from the user is encrypted as it is captured, making the transmission of any data more secure than the data normally passed from the browser to the web site. We also provide opt-out functions for the user, allowing people to manage their own data collection and privacy decisions.

Lastly, unlike most “on-demand” services, which are “multi-tenant”, each Magiq customer has their own cloud instance. This means that each site’s data is in a separate system to which the site owner has exclusive access and rights. Moreover, the Magiq subscriber can decide whether they would prefer their instance to be located in the EU or the USA, thus easing data management and privacy compliance.

Personalisation features Q4 Which criteria should someone selecting a vendor of personalization technology consider?

Clearly this will involve the factors affecting return on investment and cost of ownership on one side, and functional issues on the other.

Magiq aims to provide top-notch ROI by reducing deployment costs (via the Tag-Free deployment) and minimizing operating costs by using the latest cloud-computing technology and by allowing the user complete control over their expenditure via the Gnomes’ budgets. At the same time it is attempting to provide maximum return by providing usable/actionable data on individual users and their interests, and by allowing one-to-one personalization (as opposed to the suck-it-and-see approaches provided by MVT tools and the “herd-behavior” approaches of product recommendation solutions).

From a functional perspective, each business needs to understand the functions they want to implement to maximize the effectiveness of their on-line marketing and communications. This will vary from business to business. Magiq’s range of simple-to-use apps attempts to provide powerful personalization, optimization and CRM solutions on normal PC and mobile platforms. Other solutions focus on features like product recommendations and simple A/B testing, which are functions not provided by Magiq today.

Also, the roadmap of a supplier is important, as investments in solutions in this area tend to be quite long-term (especially as the value and quality of the available customer data grow). So it is important to choose suppliers with a long-term commitment to the market and a vision for the future.

So for example, Magiq provides a good platform based on individual customer data, and we plan to extend the family of apps to embrace other applications like email campaigns, dynamic pricing, cross channel data integration etc. and also extend it to embrace Flash, Flex, and standalone apps created with Adobe AIR.

Assessing the value of website personalisation tools Q5 How would you recommend assessing the value of a tool like Magiq? Which KPIs should be reviewed?

Magiq is focused on the implementation side of on-line marketing. While it records all the activity it has undertaken, it does not undertake the analytics part of the process, as most users already have web analytics or customer analytics and reporting tools, which we expect them to use to analyze the effectiveness of the campaigns and activities that Magiq implements.

The actual KPIs that should be measured and tracked are clearly customer-specific. For a transactional site these might be focused on revenue and conversion rate. For non-transactional sites other metrics are important: some customers will choose to focus on visitor retention and engagement factors (duration of engagement, time on site, visit frequency and latency, etc.), while others will want to analyze the customer data provided by Prospect and Retain to assess the value of their audience in monetary or other ways.


How To Boost Social Media Campaigns With Personalization

We collected some of the best social media campaigns from recent years and analyzed how they could have been improved with personalization tactics.

We’ve seen some cool ones, but let’s dig deeper!

Social media sites like Facebook personalize their visitor’s experience in almost every possible way. They try to understand who their visitors are and what type of content would delight them in order to provide a better experience.

The Last Selfie

The campaign was much more successful than expected. WWF originally intended to reach millennials with the photos, but in the end, their message reached a much wider audience. The pictures were seen by 120 million Twitter users in one week, which is half of all active users on that social media site.

How to make it better?

This exceptional campaign could have been even more moving if WWF had shown pictures of animals likely to appeal to each user. Of course, it wouldn’t have been possible in all cases. But at least on Facebook, where the pictures were also shared many times, the portraits of the animals could have been matched to the activities or countries the users are attached to. So for example, WWF could have shown tigers to extreme sports lovers and pandas to tai chi fans.

Band of Brands

Newcastle Brown Ale tried to crowdsource its TV ad for the most important American football event of 2024.

“Lacking the $4.5 million needed to buy 30 seconds of Big Game airtime, Newcastle decided to take a cue from the sharing economy that’s made Kickstarter, Uber, Airbnb, and Citi Bike so popular. Our plan was simple. We’d essentially sell ad space in our ad, asking 20 to 30 scrappy brands like ours to pitch in for airtime with us, and then cram all 20 to 30 of those brands into one Big Game ad,” Newcastle wrote about its idea.

How to make it better?

Many people follow or like brands on Facebook. Based on interests or a friend’s taste, it’s also possible to predict if a brand would appeal to someone who didn’t interact with it before. The participating companies could have created different versions of the ad and shown their own versions to their fans and people who had a higher chance to engage with them.

Whole Foods

Thanks to the recently introduced autoplay function, videos now play a more important role in Facebook marketing than ever before. Maybe this is the reason why Whole Foods Market, an American supermarket chain that specializes in organic food, decided to post short how-to videos on its Facebook page a few months ago.

Most of the videos are about preparing healthy dishes, and some contain practical tips for people who love cooking. The videos quickly became popular among the fans of the supermarket chain, and many of them have been played more than 10,000 times.

How to make it better?

Many people share information on Facebook about their food preferences. For example, a few years ago one of the most popular Facebook pages in Hungary was about a chocolate bar, and it still has 929,000 fans.

Whole Foods Market could have increased engagement if it had analyzed people’s eating habits and, instead of showing the same videos to every fan, displayed videos accordingly. What better way to be greeted on Facebook than with your favorite dish?

Bud for buds

Budweiser created a Facebook promotion that let you buy a beer for a friend even if you couldn’t meet personally. The brand developed an app for the campaign and partnered with several bars and restaurants that accepted the coupons sent through the app.

“Beer is the original social network… Whether you’re toasting your birthday, a job promotion, an engagement, or simply the end of a long work week, we want to encourage everyone to bridge the physical and digital worlds by allowing you to send your friend a beer over Facebook,” said Lucas Herscovici, Anheuser-Busch vice president of consumer connection about the initiative in a press release.

It turned out that people like the idea of receiving beers on Facebook. Nearly all coupons sent through the app were redeemed, and three times as many Budweiser beers were sold in the participating bars than before.


C++ Dynamic Allocation Of Arrays With Example

What is a Dynamic Array?

A dynamic array is quite similar to a regular array, but its size is modifiable during program runtime. Dynamic array elements occupy a contiguous block of memory.

Once a regular array has been created, its size cannot be changed. A dynamic array is different: it can expand its size even after it has been filled.

A regular array is allocated a predetermined amount of memory when it is created. This is not the case with a dynamic array, which grows its memory size by a certain factor when there is a need.


Factors impacting performance of Dynamic Arrays

The array’s initial size and its growth factor determine its performance. Note the following points:

If an array has a small size and a small growth factor, it will keep on reallocating memory more often. This will reduce the performance of the array.

If an array has a large size and a large growth factor, it will have a huge chunk of unused memory. Due to this, resize operations may take longer. This will reduce the performance of the array.
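To make this trade-off concrete, here is a minimal, illustrative sketch (not part of the original tutorial) of a push operation that doubles the array’s capacity whenever it is full. With a growth factor of 2, appending n items triggers only about log2(n) reallocations:

#include <iostream>
#include <algorithm>
using namespace std;

// Append a value, doubling the capacity when the array is full.
void push(int *&arr, int &size, int &capacity, int value) {
    if (size == capacity) {
        capacity *= 2;                    // growth factor of 2
        int *bigger = new int[capacity];
        copy(arr, arr + size, bigger);    // copy the existing elements
        delete[] arr;                     // free the old block
        arr = bigger;
    }
    arr[size++] = value;
}

int main() {
    int capacity = 2, size = 0;
    int *arr = new int[capacity];
    for (int i = 0; i < 10; i++) {
        push(arr, size, capacity, i * i);
    }
    for (int i = 0; i < size; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
    delete[] arr;
    return 0;
}

A larger growth factor would mean fewer reallocations but more unused memory; a smaller one means the opposite.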

The new Keyword

In C++, we can create a dynamic array using the new keyword. The number of items to be allocated is specified within a pair of square brackets. The type name should precede this. The requested number of items will be allocated.

Syntax:

The new keyword takes the following syntax:

pointer_variable = new data_type[number_of_items];

The pointer_variable is the name of the pointer variable.

The data_type must be a valid C++ data type.

The keyword then returns a pointer to the first item. After creating the dynamic array, we can delete it using the delete keyword.

Example 1:

#include <iostream>
using namespace std;

int main() {
    int x, n;
    cout << "Enter the number of items:" << "\n";
    cin >> n;
    int *arr = new int[n];
    cout << "Enter " << n << " items" << endl;
    for (x = 0; x < n; x++) {
        cin >> arr[x];
    }
    cout << "You entered: ";
    for (x = 0; x < n; x++) {
        cout << arr[x] << " ";
    }
    return 0;
}


Code Explanation:

Include the iostream header file into our program to use its functions.

Include the std namespace in our program in order to use its classes without calling it.

Call the main() function. The program logic should be added within the body of the function.

Declare two integer variables x and n.

Print some text on the console prompting the user to enter the value of variable n.

Read user input from the keyboard and assign it to variable n.

Declare an array to hold a total of n integers and assign it to the pointer variable arr.

Print a message prompting the user to enter n number of items.

Use a for loop to create a loop variable x to iterate over the items entered by the user.

Read the elements entered by the user and store them in the array arr.

End of the body of the for loop.

Print some text on the console.

Use a for loop to create a loop variable x to iterate over the items of the array.

Print out the values contained in the array named arr on the console.

End of the body of the for loop.

The program must return a value upon successful completion.

End of the body of the main() function.

NOTE: In the above example, the user is allowed to specify any size for the array during run time. This means the array’s size is determined during runtime.

Initializing dynamically allocated arrays

It’s easy to initialize a dynamic array to 0.

Syntax:

int *array{ new int[length]{} };

In the above syntax, length denotes the number of elements to be added to the array. Since we need to initialize the array to 0, the braces are left empty.

Example 2:

#include <iostream>
using namespace std;

int main(void) {
    int x;
    int *array{ new int[5]{ 10, 7, 15, 3, 11 } };
    cout << "Array elements: " << endl;
    for (x = 0; x < 5; x++) {
        cout << array[x] << endl;
    }
    return 0;
}


Code Explanation:

Include the iostream header file into our program to use its functions.

Include the std namespace in our program to use its classes without calling it.

Call the main() function. The program logic should be added within the body of the function.

Declare an integer variable named x.

Declare a dynamic array named array using an initializer list. The array will hold 5 integer elements. Note that we’ve not used the “=” operator between the array length and the initializer list.

Print some text on the console. The endl is a C++ stream manipulator that inserts a newline and flushes the output buffer, moving the cursor to the next line.

Use a for loop to iterate over the array elements.

Print the contents of the array named array on the console.

End of the body of the for loop.

The program must return a value upon successful completion.

End of the body of the main() function.

Resizing Arrays

The length of a dynamic array is set during the allocation time.

However, C++ doesn’t have a built-in mechanism for resizing an array once it has been allocated.

You can, however, overcome this challenge by allocating a new array dynamically, copying over the elements, then erasing the old array.

Note that this technique is prone to errors, so try to avoid it; a minimal sketch of the approach follows.
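For completeness, here is a minimal sketch of that allocate-copy-delete technique, assuming a simple int array. In practice, std::vector does this bookkeeping for you and is the safer choice:

#include <iostream>
#include <algorithm>
using namespace std;

int main() {
    int oldSize = 3, newSize = 6;
    int *arr = new int[oldSize]{ 1, 2, 3 };

    // Allocate a larger array, copy the old elements over, then erase the old array.
    int *resized = new int[newSize]{};    // extra slots are value-initialized to 0
    copy(arr, arr + oldSize, resized);
    delete[] arr;
    arr = resized;

    for (int x = 0; x < newSize; x++) {
        cout << arr[x] << " ";            // prints: 1 2 3 0 0 0
    }
    cout << endl;
    delete[] arr;
    return 0;
}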

Dynamically Deleting Arrays

A dynamic array should be deleted from the computer memory once its purpose is fulfilled. The delete statement can help you accomplish this. The released memory space can then be used to hold another set of data. However, even if you do not delete the dynamic array from the computer memory, it will be deleted automatically once the program terminates.

Note:

To delete a dynamic array from the computer memory, you should use delete[] instead of delete. The [] instructs the compiler to release the memory of the whole array rather than a single element. Using delete instead of delete[] when dealing with a dynamic array may result in problems such as memory leaks, data corruption, and crashes.

Example 3:

#include <iostream>
using namespace std;

int main() {
    int x, n;
    cout << "How many numbers will you type?" << "\n";
    cin >> n;
    int *arr = new int[n];
    cout << "Enter " << n << " numbers" << endl;
    for (x = 0; x < n; x++) {
        cin >> arr[x];
    }
    cout << "You typed: ";
    for (x = 0; x < n; x++) {
        cout << arr[x] << " ";
    }
    cout << endl;
    delete[] arr;
    return 0;
}


Code Explanation:

Include the iostream header file in our program in order to use its functions.

Include the std namespace in our program in order to use its classes without calling it.

Call the main() function. The program logic should be added within the body of the function.

Declare two variables x and n of the integer data type.

Print some text on the console. The text will ask the user to state the number of numbers they will enter.

Read user input from the keyboard. The input value will be assigned to variable n.

Declare a pointer variable *arr. The array arr will reserve some memory to store a total of n integers.

Print a message on the console prompting the user to enter n numbers.

Create a for loop and the loop variable x to iterate over the numbers entered by the user.

Read the numbers entered by the user and store them in the array arr.

End of the body of the for loop.

Print some text on the console.

Use a for loop and the loop variable x to iterate over the contents of array arr.

Print out the values of the array arr on the console.

End of the body of the for loop.

Print an empty line on the console.

Free up the memory of the array arr.

The program will return a value when it completes successfully.

End of the body of the main() function.

Summary:

Regular arrays have a fixed size. You cannot modify their size once declared.

With these types of arrays, the memory size is determined during compile time.

Dynamic arrays are different. Their sizes can be changed during runtime.

In dynamic arrays, the size is determined during runtime.

Dynamic arrays in C++ are declared using the new keyword.

We use square brackets to specify the number of items to be stored in the dynamic array.

Once done with the array, we can free up the memory using the delete operator.

Use the delete operator with [] to free the memory of all array elements.

A delete without [] frees the memory of only a single element.

There is no built-in mechanism to resize C++ arrays.

To initialize an array using a list initializer, we don’t use the “=” operator.

Exclusive Interview With Aria Nejad, In-House Counsel, Lex Machina

Pursuing a legal case requires meticulous planning, analysis, and detecting anomalies all along its course, which may take years, and in some cases decades, to conclude. Legal analytics is a branch of data science that is proving to be of immense help to attorneys in their legal work. Lex Machina is a legal analytics firm that leverages analytics and data combined with attorney inputs to come up with rare legal insights. Analytics Insight has engaged in an exclusive interview with Aria Nejad, In-House Counsel of Lex Machina.

Kindly brief us about the company, its specialization, and the services that your company offers.    

Lex Machina’s journey from a small venture-backed start-up in 2010 to its position in 2023 as the leader of the legal analytics movement demonstrates how quickly and profoundly data-driven decision-making has transformed the business and the practice of law in the United States. In short, we provide legal analytics to law firms and companies. We enable them to craft successful strategies, win cases, and close business. The practice of law is highly competitive and we help litigators gain an edge using the most accurate legal analytics in the industry. We process litigation data and reveal insights never before available about judges, lawyers, parties, and the subjects of the cases themselves. We call these insights legal analytics because analytics involves the discovery and communication of meaningful patterns in data.

With this data, for the first time, lawyers can predict the behaviors and outcomes that different legal strategies will produce. Our clients also use Lex Machina to land new clients by providing winning pitch decks that demonstrate specific subject matter expertise, familiarity with opposing parties and counsel, experience in front of a specific judge, and available bandwidth. Corporate counsel uses Lex Machina to select and manage outside counsel and set litigation strategies and tactics.

Mention some of the awards, achievements, recognitions, and clients’ feedback that you feel are notable and valuable for the company.

Lex Machina was named “Greater Bay Area Top Workplaces 2023” (The San Francisco Chronicle) and “Legal Tech Company of the Year 2023” (CIO Review), and won the “2024 Legal Technology Trailblazer” award (National Law Journal Trailblazer Awards, 2023), the “Media Excellence” Award for Analytics/Big Data (13th Annual Media Excellence Awards, 2023), “Best Decision Management Solution” (AI Breakthrough Awards, 2023), and “Disruptor of the Year” (Changing Lawyer Awards, 2023). We consider feedback from satisfied clients the greatest award. They often use Lex Machina to evaluate the viability of a defendant, check their legitimacy, or predict their response to litigation, and at times it allows them to respond to clients’ queries with insightful inputs.

What is the edge your company has over other players in the industry?

Lex Machina focuses on outcome analytics – what happens in cases like yours – something that none of our competitors or their products specialize in. Outcome analytics include remedies, findings, case resolutions, and damages. Unless you know about the possible outcomes for a proceeding, you cannot design a winning formula for litigation success. That is something unique to Lex Machina. Since Lex Machina created legal analytics approximately ten years ago, several established legal publishers have released tools they call litigation analytics. Most of these take available court data at face value and present this information as graphs and charts, without doing the heavy lifting of reading, cleaning, categorizing, or fixing the many errors the raw litigation data contains.

Please brief us about the products/services/solutions you provide to your customers and how they get value out of it.

If the case moves forward, legal analytics can present critical insights into the behavior and track record of the opposing law firm and its lawyers, showcasing their success rates in contract breach cases on both the plaintiff and defendant sides. You will see how long similar contract breach cases have taken in your particular venue, in front of a particular judge, and against the company your client is looking to sue. You can learn at what point the cases were terminated, such as at summary judgment or trial. Last but not least, the data can illuminate potential damages that have been awarded in similar cases so you can weigh the potential risk against a possible windfall. Timing and damages data are critical for lawyers to estimate how much a legal matter might cost.

What are your growth plans for the next 12 months?

State court expansion remains a top priority at Lex Machina. We’re adding modules on a court-by-court basis, with an emphasis on strict data quality and integrity. We currently offer legal analytics for 27 state courts, which together encompass over 3.3 million individual cases. We are proud of this key achievement in our state court journey and will continue to grow our coverage over the next year and beyond.

We recently launched our State Motion Metrics which helps users quickly assess their motion strategy and easily identify winning arguments. This is our first feature using a cutting-edge deep learning model. Lex Machina has released State Motion Metrics for the four Delaware state courts, including the Delaware Court of Chancery, as well as Los Angeles County Superior Court, with additional plans to roll out State Motion Metrics in more courts later this year.

How does your company’s rich expertise help uncover patterns with powerful analytics and machine learning?

We do the heavy lifting of reading, cleaning, categorizing, or fixing the many errors that raw litigation data contains. We use natural language processing and machine learning to enhance our litigation data. This is the first step in gathering raw information, reading and analyzing each court document, correcting erroneous data, and supplementing the missing data. We use signature block analyzers to update and add the correct law firms and attorneys to all the cases they worked on, even if they don’t appear on the face of the docket sheet.

But we go even further. To be completely confident in our analytics, we supplement our technology with the second step of human attorney review. Every single Lex Machina employee who reviews cases has a legal background. We use a cutting-edge deep learning model to create the new State Motion Metrics, and Lex Machina’s attorneys apply and test the model for accuracy and completeness.

The Importance Of Dynamic Rendering With Geoff Atkinson


For episode 189 of The Search Engine Journal Show, I had the opportunity to interview Geoff Atkinson, Founder and CEO, Huckabuy.

Atkinson talks about dynamic rendering, how it helps search engines index JavaScript websites faster, and who can benefit from this solution.

What is dynamic rendering?

Geoff Atkinson (GA): Dynamic rendering is probably the biggest change Google’s made in maybe 10 years.

For them to actually offer, “we’ll crawl something that’s different than what the user experience is now, content and data all need to match up,” that’s a big change for them.

For years they were like, “you have to have the user experience be the same thing.”

Brent Csutoras (BC): For context, anything that was different previously was considered cloaking. Right?

GA: Correct. So dynamic rendering, it’s actually a pretty straightforward concept. It started with the difference between a mobile device and a desktop.

All it means is that our URL will render differently or dynamically based on what calls it.

So if you call a webpage from your mobile device, you’re going to get one experience.

If you call one from your desktop, you’re going to get a slightly different one.

Their big change was they said, well, now you can actually give a version for us.

And really, the reason for that is around the amount of JavaScript and front end dynamic technologies that have made it difficult for them to crawl and understand a site.

They basically said, “Here’s a way for us to keep it simple. Give us a simplified version and we’ll be able to crawl and index that much more efficiently than what the user’s experiencing.”

What would be an example of what dynamic rendering would actually do?

GA: I’d say the most famous JavaScript things that really make Google get caught up while crawling are actually chat boxes, personalization, and tracking tags that are dynamic.

As soon as they hit JavaScript, they simply can’t crawl it with their HTML crawler. And so it goes to a rendering queue and a rendering queue takes quite a bit more processing time.

And a rendering queue is literally the same technology as your Chrome browser.

It’s just executing a page fully, allowing them to come in and actually crawl that dynamic content, and it takes more processing time, so it helps if you can strip that stuff out in a dynamically rendered version.

What are the other things that dynamic rendering will do for somebody’s website that they might not get otherwise?

GA: [Y]ou could have all the content resources in the world, but if Google can’t see that actual content, what good is it doing? So we see that a lot.

I think companies have bigger indexation issues than they have any idea because it’s kind of hard. You see the crawl stats, right? And you’re like, “Oh, they’re crawling me, I’m good.”

And you see that they’re downloading information, but you don’t really know exactly what they’re downloading, how much of it, and whether they’re actually accessing the stuff that you’re working on.

All those problems just get eliminated. Almost instantly, all the content is being indexed, and content affects rankings and rankings affect traffic.

You get a huge, pretty significant benefit if the site is pretty heavy and its JavaScript is difficult to crawl.

All of a sudden they’re going to become privy to all this new information in a very short amount of time and that’s actually going to impact rankings and traffic and all those other good things.

Why do you think the SEO community as a whole has kind of not really embraced this, or that it’s not on every site?

GA: Yeah, I find that shocking. But if we just sort of take a step back and we look at marketing departments and their general skillset, like even SEO groups sometimes aren’t the most technical.

So if you think of a marketing organization, their skill set is really not technical SEO, that’s the last thing that they’re going to get to, right?

They don’t have developers working on SEO, very rarely.

And it’s a very technical problem, so you can throw tons of resources at content and link building and all those sorts of more straightforward tasks and not even fully understand or fully recognize the technical problems that you have, because you just don’t have that skillset on the team.

And we see that almost happen everywhere. Like even if they’re working with an agency or whoever, that technical skill set is so rare…

Within our little community it’s big, right?

But for when you step into a big internal marketing team, there’s just no one there that speaks that language.

So, I think the reason is that it’s such a different hat to wear as a marketer, getting into technical SEO versus managing your PPC spend or your content team or branding and messaging or social.

It’s just a totally different skillset and it’s usually missing, so I think that’s kind of why it hasn’t been adopted as quickly as we would like.

On technical SEO initiatives: how could SEOs connect and convince the developers?

GA: I think about almost every organization, think about just the SEOs you talked to and whether they feel empowered or it’s a bottleneck getting through development and it’s almost always a bottleneck…

It is like an organizational mindset that you have to get in.

Do you feel like everybody needs to have dynamic rendering?

GA: I’d say probably 60% of sites out there need it, which is a lot.

And then there’s 40% where it’s like, it’d be a nice-to-have, but it’s not going to blow your socks off.

Like you’re getting enough, it’s a really small site, maybe there’s only a hundred pages indexed so Google can get through it. The site doesn’t change that much.

There’s just not as much upside as for some of these larger, more complicated sites that Google is really struggling to understand.

So there are a good number of sites that don’t necessarily need it.

Everybody could benefit, but what we find is about 60% of the internet, like really could use this solution.

Think about the number of JavaScript things that are included by business owners on their websites without thinking at all about what this does for Google crawling.

And then, of course, they’re going to be like, “Yeah, we want personalization and we want chat boxes,” and so they just throw it on there.

Meanwhile, it makes Google’s job like impossible…

What does it look like to implement dynamic rendering?

GA: So the first piece, how to do it on your own.

The crux of dynamic rendering is really the conversion of your dynamic content into flat HTML. The technical challenge is to be able to do that.

If you have content being generated through JavaScript that is important for your rankings and you want Google to be aware of it, being able to convert that into flat HTML and leveraging some sort of CDN (like Cloudflare, CloudFront or Akamai) to be able to basically load that information up really quickly and eliminate literally all the JavaScript on the page, that’s how you kind of have to go.
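[Editor’s note: To illustrate the routing logic Atkinson describes, here is a hypothetical sketch, written in C++ only for consistency with the tutorial elsewhere on this page; real deployments implement this in a CDN worker or web server configuration. The bot tokens and page strings are illustrative assumptions, not Huckabuy’s actual implementation.]

#include <iostream>
#include <string>
#include <vector>
using namespace std;

// Decide whether a request should receive the pre-rendered flat-HTML
// version (for search engine bots) or the normal JavaScript-heavy page.
bool isKnownBot(const string &userAgent) {
    const vector<string> botTokens = { "Googlebot", "Bingbot", "DuckDuckBot" };
    for (const string &token : botTokens) {
        if (userAgent.find(token) != string::npos) {
            return true;
        }
    }
    return false;
}

string serve(const string &userAgent) {
    // Bots get the flat HTML snapshot; users get the dynamic page.
    return isKnownBot(userAgent) ? "pre-rendered flat HTML" : "dynamic JavaScript page";
}

int main() {
    cout << serve("Mozilla/5.0 (compatible; Googlebot/2.1)") << endl;    // pre-rendered flat HTML
    cout << serve("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0") << endl; // dynamic JavaScript page
    return 0;
}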

It’s doable for sure. We actually see some companies doing it in house, it’s kind of hard to do in house, but we see it happening.

The second piece is automation.

We’ve built that converter… we don’t actually have to have any developer look at your site. They don’t have to log in and do a bunch of work.

You literally make a DNS change and then Huckabuy takes over the bot traffic and we create this dynamic rendered version through SEO Cloud that’s flat HTML.

We have a partnership with CloudFlare that allows us to keep all this information at edge. You kind of hear that term now being used: edge SEO.

So at edge basically means it’s pre-cached and located all around the world in 200 different locations so that no matter where a bot is coming in from, they get this really lightweight and cached page…

This podcast is brought to you by Ahrefs and Opteo.

To listen to this Search Engine Journal Show podcast with Geoff Atkinson:

Listen to the full episode at the top of this post

Subscribe via Apple Podcasts

Sign up on IFTTT to receive an email whenever the Search Engine Journal Show RSS feed has a new episode

Listen on Stitcher, Overcast, or TuneIn

Visit our podcast archive to listen to other Search Engine Journal Show podcasts!


Exclusive Interview With Amit Gandhi, Founder, Novelvox

A leader in Contact Centre Software solutions, NovelVox is treading fast in digital communication by redefining the way integration of multiple enterprise communication tools is applied in CRM and other client-centric domains. Its latest product, CXInfinity, developed as an omnichannel conversational AI platform, is a case in point. Analytics Insight has engaged in an exclusive interview with Amit Gandhi, founder, NovelVox.

1. With what mission and objectives were the company set up? In short, tell us about your journey since the inception of the company?

NovelVox is essentially an outcome of my over two-decade-long experience in the software application development space, especially in the domain of Contact Center Software Applications and Integrations. The company has, since its inception, been working to address the most critical contact center integration difficulties. We offer tools to integrate an industry’s core applications – a solution optimized for any specific industry segment. We developed the first Agent Desktop Designer Studio with a drag-and-drop designer in a low-code model. Our solutions do not require agents to work in one contact center platform, access client information through a CRM, accept requests in a different ticketing tool, and juggle other technologies, as they get a comprehensive tool at their disposal right on their screen.

Years later, NovelVox came up with CXInfinity, an Omni-channel messaging and conversational AI platform suited for all major industries. It comes with core integration and pre-built use cases to ensure optimum performance.

2. What is your biggest USP of the company? 3. How do you plan to revolutionize the Indian/US market and tap the market?

If we look at the Indian and US market scenarios, we would hardly discover any difference, for both markets have similar needs. Hence, we stick to our motto of quicker resolution, thereby intermittently introducing newer products that serve the purpose in small and big ways. It is nothing short of a revolution that NovelVox is currently offering the biggest integration library, enabling ready integration with more than 75 business applications, including core system integration for banking, healthcare, and credit unions, among others. We’ve enabled integrations with the industry’s major third-party applications to unify caller information while streamlining industry solutions by delivering industry-specific templates and customizations for faster responses and improved customer experience.

4. How do you see the company and the industry in the future ahead?

As the contact center market is expanding rapidly, we endeavor to continually add new products corresponding to fresh concerns and demands. With data analytics becoming increasingly popular today, every firm would like to rely on it to obtain insight into its weaknesses and strengths. This massive amount of data will necessitate cloud storage as well as computing, which is exactly what we too are working on right now. Regarding the future, chatbots and voice bots are slated to capitalize on the trend.

5. How are disruptive technologies like AI/Machine Learning/Cloud Computing impacting today’s innovation?

The havoc wreaked by Covid made us all realize how much we rely on healthcare and how important it is to have a system in place that is readily accessible when needed.

As a result, AI, ML, and Cloud Computing are disrupting traditional industries and seeping into arenas that were unthinkable just a couple of years ago.

For instance, we are already penetrating the healthcare and retail sectors, and the next in line is the F&B industry. The idea is for everybody to benefit from the available innovative solutions. Hyper-personalization within sectors such as e-commerce, and new AI and ML innovations with NLG, are other innovations creating waves.

6. Which industry verticals are you currently focusing on? And what is your go-to-market strategy for the same?

For over a decade, we have been offering customized solutions for different industries, including healthcare, banking, retail, insurance, etc. For example, we enable healthcare providers to offer exceptional patient experiences and increase operational efficiency. An integrated desktop designed especially for healthcare providers gives agents access to core platforms such as EPIC, Cerner, Aetna, Allscripts, and Telmed IQ, along with instant access to caller information like the last appointment status, the reason for the call, and the doctor’s availability.

For banking, the integrated agent desktop works seamlessly with core banking applications to ensure a responsive CX and enhance value. Speaking of strategy, we would reiterate that we develop ready-integration solutions on a need-based proposition specifically for different industries.

7. Kindly share your point of view on the current scenario of Big Data Analytics and its future.
