Author: Abhijit Asad

History repeats itself as streaming platforms take on the roles of TV channels

With proper broadband internet access hitting mainstream availability worldwide, we have also witnessed the rise of streaming video content beyond ‘casual’ platforms such as YouTube and certain unsavoury domains of the internet. Faster connections have allowed entire movies and what used to be referred to as ‘television programming’ to become accessible over the internet in high-quality streamable formats. The conveniences offered by such services are rather obvious: not only do they free viewers from a television channel’s program schedule, inability to pause and relentless barrage of commercials, but they also let them enjoy a vast number of shows of their choice – a veritable all-you-can-eat buffet – at their own pace and timing of convenience. These shows can be watched on any smart device, without requiring viewers to own a physical copy of the program on any medium, or to download and store a local copy of the videos on their devices.

Netflix was the first major player in the streaming game, rapidly establishing its service as a must-have by not only offering a vast number of shows both new and old, but also by eventually coming up with a whole host of ‘Netflix originals’ – shows and movies made with the authorization and support of Netflix, available for viewing exclusively on the platform. The platform even breathed new life into certain high-potential shows by renewing them past their original cancellation or hiatus periods. For a vast number of people, Netflix managed to supplant cable TV services along with their myriad pitfalls and inconveniences. The benefits of streaming media were far too many to be ignored, and many users who did not care for regular TV programming realized that this was all they needed.
Although competitors were slow to emerge, they did show up eventually. Amazon Prime Video, Hulu, HBO Now and Apple TV+ made their appearances one by one, with enticing new exclusive offerings of their own, each with their own star-studded casts and crews. With each platform loudly touting its exclusives and new arrivals in an attempt to attract subscribers, the choice to subscribe to Netflix was suddenly not the most obvious one, and anyone who wanted to keep track of specific shows across different platforms discovered that they needed simultaneous subscriptions to each streaming service – a prospect that was not only inconvenient but also disturbingly expensive. And now, with the arrival of Disney’s own streaming service, Disney+, the viewers’ choices aren’t getting any easier to make.

Unlike cable TV, which offers a vast number of ready-to-watch channels across many different genres along with a handful of premium pay-per-view options, the catalogue of each streaming service is designed to cater to the requirements of any viewer under its umbrella. But while their variety of content is massive, the fact that many a viewer’s tastes span the contents of multiple services remains a persistent hurdle. These services aren’t exactly cheap, especially when subscribed to together, and the total bills rack up quite quickly. For example, someone who wanted to watch ‘Stranger Things’ on Netflix, check out ‘The Boys’ on Amazon Prime Video, and maybe sneak an occasional peek at Disney+’s ‘The Mandalorian’ would need to pay for Netflix, Amazon Prime Video and Disney+ all at once – something that naturally would not make sense for a lot of budget-limited wallets, and would inevitably lead to an either-or scenario for many viewers.
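To see how quickly those bills stack up, here is a quick back-of-the-envelope tally. The prices below are purely illustrative placeholders, not the services’ actual rates:

```python
# Hypothetical monthly subscription prices in USD -- illustrative
# placeholders only, not the services' actual rates.
subscriptions = {
    "Netflix": 12.99,
    "Amazon Prime Video": 8.99,
    "Disney+": 6.99,
}

# Total up the monthly and yearly cost of holding all three at once.
monthly_total = sum(subscriptions.values())
annual_total = monthly_total * 12

print(f"Monthly: ${monthly_total:.2f}")  # Monthly: $28.97
print(f"Annual:  ${annual_total:.2f}")   # Annual:  $347.64
```

Even with modest placeholder prices, three stacked subscriptions approach the cost of a basic cable package, which is exactly the either-or pressure described above.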

Netflix still enjoys its first mover’s advantage in this sector, with its brand awareness growing all the time and the highest number of subscribers, catering to 15.8% of video viewers around the world. Its repertoire of offerings is growing at a rapid pace, and the entities in charge of its content are stepping up their game to take full advantage of the added recognition the streaming service accrues. It is interesting to note that even many nations considered havens of piracy – ours included – have welcomed the arrival of streaming platforms, simply because of their instant availability (with no wait time) and lack of need for on-device storage – and even among them, Netflix has already made its mark. It is highly likely, though, that Disney+, with its own vast content library (which includes their entire repertoire of glorious classics that have survived the test of time and nostalgia, as well as newer productions and a few fresh exclusives), will quickly prove itself a formidable rival to the streaming titan.

However, challenges do exist in this crowded battlefield. With each service requiring its own app, many non-techie customers would consider it a hassle to maintain them all, at least initially. Right now, no option exists that can unify the contents of the streaming services under a common umbrella and make them accessible from within a single interface. Meanwhile, every streaming company is devoting vast resources to creating combinations of offerings that would make it very difficult for a subscriber to simply give up and move to another service. Because of the added expense of subscribing to multiple services, many viewers are simply moving – or reverting – to pirating the shows of their choice, which is the worst-case scenario for every player in this arena: no one would profit from it, and it is likely to persist unless some way emerges to combine the streaming services and/or make them more affordable.

The definition of video-based home entertainment has already undergone a massive paradigm shift over the last decade, and the intense competition is a welcome development, as such competition always comes to benefit the consumer. But as the streaming wars intensify, it remains to be seen which services will be left standing at the end of it all.

Asus’s stunning new ZenBook Pro Duo takes multitasking to new dimensions

Anyone who has ever used a laptop for productivity purposes on the go – be it for something as prosaic as crunching figures in Microsoft Excel, or as complex as trying to work on a CAD model, or as delicate as editing a video – has inevitably run into the frustrating lack of visual real estate on the device. While high-resolution laptop screens, going as high as 4K, are quite prevalent nowadays, more often than not, they are limited by the size of the device, which prevents their high resolutions from being fully utilized. Because of the immense flexibility offered by desktop computers, it should come as no surprise that many power users still prefer using them, often equipped with multiple monitors, as their primary workstations, resorting to laptops only when forced to do so.

There have been numerous attempts at creating laptops with more than one display, with none-too-encouraging results. Swapping out the keyboard and touchpad for another full-sized display was the most obvious path, but users who braved the premium prices for such devices realized far too late that the lack of ‘serious’ input options made them ill-suited for work. As a result, these laptops never really made it to the mainstream. However, with the new ZenBook Pro Duo, Taiwanese technology titan Asus has successfully created a portable computer that opens up new horizons of input options and productivity without cutting back on the essentials.

While the idea of a second screen on a laptop may not seem particularly original, Asus’s implementation of it certainly is. Instead of going with a gimmicky and nearly useless mini-screen like the Touch Bar of post-2016 MacBook Pros, or replacing the entire lower deck of the laptop with a full-sized screen, Asus wisely moved the touchpad to the right of the keyboard, and used the freed-up space in front of the keyboard to place a secondary display – called the ScreenPad Plus – that is the same width as the laptop’s main display but roughly half its height. While the usefulness of such a screen may seem dubious, the difference it makes in terms of productivity is nothing short of astounding, even more so because it did not come at the expense of the keyboard or the touchpad.

For starters, the ZenBook Pro Duo’s 15.6” main display is already stunning. Framed by thin bezels, it not only has a staggering 4K resolution, but is also one of the rare laptop displays to utilize OLED technology, something mostly seen on certain higher-end mobile phones. For non-techie users, this means the display is capable of showing the most vibrant colours and the purest of blacks while delivering an astonishing range of contrast. Any professional, particularly one working in graphic design, motion graphics or video editing, would be blown away by the sheer fidelity of this display – to say nothing of enjoying 4K movies in glorious HDR colours. The ScreenPad Plus works seamlessly with the primary display, and can be used either as a dedicated display or as a vertical extension of the primary one.

It must be mentioned here that both displays support touch input, but it is on the ScreenPad Plus that this feature shines the most. Asus has worked closely with Microsoft to create a user experience that is truly unique. The ScreenPad Plus’s custom software allows groups of up to three preset program windows to be quickly pinned and tiled on it as per the user’s preferences, and even allows tiled app groups to be swapped between the primary and secondary displays at the touch of a key when the windows on the ScreenPad Plus require greater attention. Even mundane tasks, such as moving between word processor, spreadsheet and presentation windows while compiling reports, become far more pleasant, as the need to constantly switch between programs is eliminated.

Asus has successfully walked the fine line between ease of use and customizability when designing the ScreenPad Plus and its associated software, and the amount of research that has gone into it is clearly evident from the results. The ScreenPad Plus can be used in a variety of ways for different programs. For example, it can display the virtual mixing console of an audio engineer while the main display shows the audio tracks, allowing the user to manipulate the knobs and sliders directly by touch, or it can display colour panels and tool panels while running Photoshop. It can even be used as a dedicated panel of buttons that play back recorded keyboard commands for individual programs.

What makes this device even more suitable for creative purposes is that both its displays natively support pressure-sensitive stylus input using the bundled Asus Pen. Keeping user ergonomics in mind, Asus has thoughtfully included a separate palm rest, since the laptop lacks a built-in one, making the keyboard much more comfortable for natural use. Interestingly, the touchpad also doubles as a numeric keypad, with the numbers coming alive in illuminated form in a very science-fiction-like fashion. The Harman Kardon speakers are outstanding, even more so considering their minuscule size. The 71Wh battery, while formidable, is one of the less exciting aspects of the device, offering rather average battery life – however, given the ridiculous number of pixels the device has to push every second, it’s quite a wonder that it holds out as well as it does.

While the screens of the ZenBook Pro Duo end up stealing most of the spotlight, the rest of the machine is nothing to sneeze at. Beneath its beautiful space-age design lie an almost obscenely powerful 9th-generation Intel Core i9 eight-core processor, a staggering 32GB of RAM, a blisteringly fast 1TB PCIe SSD and a powerful ray-tracing-capable GeForce RTX 2060 GPU, making this laptop ready to handle pretty much any challenge thrown at it, be it heavy-duty video editing or running the latest AAA game titles. It is also one of the first laptops to come with Wi-Fi 6, the latest generation of the wireless connectivity standard, and it supports Thunderbolt 3 out of the box for providing 4K video output to external monitors or for connecting high-end peripherals that require the highest data transfer rates.

All in all, it is safe to say that while the ZenBook Pro Duo’s design may not become mainstream anytime soon, it offers an exciting glimpse into the future of laptop design as well as that of productivity on the go, proving once again that breakthroughs can often emerge in the most unexpected of forms in the world of technology.


Wi-Fi 6: the next generation of wireless connectivity

Wi-Fi is everywhere. Be it our homes, our offices or even the restaurants and public places we frequent, it is impossible to imagine the world of today functioning without wireless networking. Emerging near the very end of the last millennium, Wi-Fi technology has undergone a fair share of revisions, and we are now looking at the arrival of its sixth iteration. Unlike its predecessors, which only bore confusing, esoteric-sounding names like IEEE 802.11b, 802.11g, 802.11n and 802.11ac, the latest generation of Wi-Fi, formally known as IEEE 802.11ax, has been specially blessed with a simple and elegant name – Wi-Fi 6.

The new nomenclature is no accident – the Wi-Fi Alliance, the authority responsible for defining the standards of Wi-Fi technology and interoperability, has deemed this update major enough to cause a paradigm shift in terms of speed and features, thus meriting a new kind of name. Many new devices – routers, laptops, phones – revealed this year at various consumer electronics expos have been certified as compatible with Wi-Fi 6, capable of taking advantage of all the bells and whistles of the new technology. Samsung’s flagship smartphone, the mighty Galaxy S10, is one of the first Wi-Fi 6-compliant devices to have hit the market.

Wi-Fi 6 is going to be the fastest version of Wi-Fi to date, optimized to transfer data over the 2.4GHz and 5GHz bands simultaneously at speeds as high as 10 gigabits per second, according to Edgar Figueroa, President and CEO of the Wi-Fi Alliance. In lay terms, this means that Wi-Fi 6 can send or receive over a gigabyte of data in a single second – more than four times the throughput and capacity possible with previous iterations of Wi-Fi. Wi-Fi undergoes a generational leap every five years or so, but a speed gain of this magnitude is as unprecedented as it is stunning. Wi-Fi 6 also improves system efficiency, reducing power usage significantly and helping to prolong the battery lives of supported devices. It is also going to be a boon for gamers and the like, with reduced latency and full-duplex communications translating into even faster response times in games.
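As a quick sanity check of the figures above, converting the quoted peak rate from gigabits to gigabytes (a back-of-the-envelope sketch – 10Gbps is the theoretical maximum, not a real-world throughput):

```python
# Convert Wi-Fi 6's quoted peak link rate from gigabits to gigabytes.
# 10 Gbps is the theoretical maximum; real-world throughput is lower.
PEAK_GBPS = 10       # gigabits per second (quoted peak)
BITS_PER_BYTE = 8

peak_gb_per_second = PEAK_GBPS / BITS_PER_BYTE
print(peak_gb_per_second)  # 1.25 -- i.e. "over a gigabyte per second"

# Time to move a 50 GB game download at that theoretical peak:
file_size_gb = 50
print(file_size_gb / peak_gb_per_second, "seconds")  # 40.0 seconds
```

Since there are eight bits in a byte, the 10-gigabit figure works out to 1.25 gigabytes per second, which is where the "over a gigabyte of data in a single second" claim comes from.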

However, blistering speed is far from the only thing that Wi-Fi 6 has to offer. Wi-Fi 6 also supports two new technologies called Orthogonal Frequency Division Multiple Access (OFDMA) and multi-user, multiple-input, multiple-output (MU-MIMO). While their names may not mean much to most people, these imbue Wi-Fi 6 with a degree of robustness that allows it to efficiently support four times as many simultaneously operating devices on the same network, compared to previous generations. Wi-Fi 6 has been built for a world filled to the brim with phones, tablets, televisions and every other kind of Wi-Fi-driven electronic device, giving them a rock-solid infrastructure that will not collapse under the load of their collective information exchange.


Wi-Fi 6 will also sport a degree of ‘spatial awareness’. Traditionally, as a device moves further away from the source of the Wi-Fi signal – typically a router or extender – the signal weakens and eventually cuts out completely. A router or extender equipped with Wi-Fi 6, however, would be fully aware of the relative location and proximity of every device connected to it, and would send out signals of variable strength depending on each device’s position. For example, a low-power signal would suffice for a device located within a few feet of the router in question, but a device teetering at the edge of the functional range would receive a strong signal to keep it from suffering gradually fading reception. Granted, going beyond the functional range of the signal source would still cause the signal to fade eventually, but within that range, performance would be uniform across its entirety, and the range would be utilized better than ever. This would be achievable even with many devices connected to a single source, with the help of a new ‘scheduling’ technology capable of managing a dozen separate streams at a time, ensuring that all devices receive adequate signal priority.

Wi-Fi 6 is also heavily beefed up in terms of security. It boasts an entirely new security standard called WPA3, an upgrade over the WPA2 framework used with previous versions of Wi-Fi. Not only does WPA3 boast far superior encryption, but it applies it on an individual level, ensuring that every device connected to the network is protected even from attacks originating from other devices on the same network.

If you are worried that your current devices might be rendered obsolete by the eventual arrival of Wi-Fi 6, fear not. Firstly, Wi-Fi 6 is still going to take a while to arrive and become properly mainstream, and secondly, even when it does, it will remain fully backward-compatible with older Wi-Fi standards. A phone or computer that doesn’t support Wi-Fi 6 will still be able to connect to a Wi-Fi 6-certified router, although a device would need to be fully Wi-Fi 6-compliant to take full advantage of its feature set. Wi-Fi 6 routers are also likely to be relatively expensive at first, with the feature appearing only on the most high-end models, but prices are unlikely to stay that way.

When can we expect to see Wi-Fi 6? Well, it is not likely to become mainstream for another couple of years. However, with its official certification program well underway, and hardware manufacturers doing everything in their power to include the latest and greatest iteration of Wi-Fi on their devices, the wait for it to become ubiquitous promises to be quite an exciting one, taking us one step closer to a future where connectivity will be simpler, faster and more reliable than ever.

Xiaomi’s journey from the heart of China to the world

For a company that is less than a decade old, Xiaomi’s list of achievements and accolades is extraordinarily long. In a relatively short time, it has become an incredibly formidable force in the global consumer electronics market. Founded on 6 April 2010 by entrepreneur Lei Jun, who currently serves as its chairman and CEO, and seven other co-founders, Xiaomi began its journey with the Android-based MIUI mobile operating system, and came up with its first MIUI-powered smartphone, the Mi 1, in 2011, which was the beginning of a legacy that continues to stand strong to this day. As of July 2019, Xiaomi has taken the 468th position on the elite Fortune Global 500 list – the youngest company to make this list – and is currently the world’s fourth-largest maker of smartphones.

Unlike many of its rivals, Xiaomi takes a radically different attitude toward the customer: it does not try to awe its users from a distance, but instead focuses on treating them like valued friends. Xiaomi’s aggressively honest pricing strategies are often reflective of this warmth. Instead of turning good technology into some kind of elusive holy grail that only the obscenely rich can afford, Xiaomi has strived to bring something to the table for everyone, regardless of their purchasing power, starting from sub-BDT 10,000 phones like the Redmi Go, and going all the way up to high-end flagship models like the new Redmi K20 Pro. Many of their products can go toe-to-toe with similar (but far more expensive) devices from other manufacturers and still hold their own. Sure, corners are often cut and necessary sacrifices are made when cost minimization comes into play, but Xiaomi’s extraordinary attention to product design and quality control means that the cut corners are rarely missed. The philosophy of innovation runs strong in Xiaomi’s culture, and its success as a company is a testament to that fact.

Xiaomi’s global expansion began in 2014, and as of now, their products have officially made their way to over 80 countries and regions around the world, in many of which they have already secured some of the topmost spots in the market. Currently, over 40% of Xiaomi’s total revenue comes from overseas sales, particularly from markets such as India, Indonesia, Western Europe, Russia, Poland, Ukraine and the Middle East. In the second quarter of 2019 alone, Xiaomi’s revenue from international markets rose to RMB 21.9 billion, showing a meteoric growth of 33.1% over the preceding quarter.

And indeed, Xiaomi’s success is reflected not only by hard numbers, but also by how it is shaping global trends in terms of product designs and features. Xiaomi has always made it a point to understand the needs and psychology of their consumers and work their findings into their products and services. Xiaomi was one of the first smartphone makers to understand the importance of long battery life, something most other manufacturers took a long time to catch on to. Nowadays, it is pretty much standard fare to find a battery with a capacity of around 4,000mAh in a smartphone, something that probably would not have become mainstream without a major manufacturer like Xiaomi persistently including high-capacity batteries in its handsets. When Apple did away with the 3.5mm headphone jack on the iPhone 7 and 7 Plus, many manufacturers rapidly followed suit, but Xiaomi has continued to buck this counterintuitive trend by boldly retaining the feature even on their premium devices, protecting the users’ audio gear from what can only be described as strategically planned obsolescence. Xiaomi also released the Mi Mix, an avant-garde smartphone with a massive full-face display – a crowning achievement of smartphone design – almost a year before the oddly similar-looking iPhone X was released to claims of supposed originality and innovation. Even Xiaomi’s reinterpretation of Android, the MIUI operating system, is jam-packed with features that one would actually use, instead of being stuffed with bloatware that slows phones down, and its superb optimization provides a fine balance between performance and battery life.

Xiaomi’s official entrance into the Bangladeshi market took place in July 2018, with the launch of the Redmi S2, closely followed by the Mi A2 and Mi A2 Lite a month later. Before the company’s official launch in Bangladesh, numerous importers – also known as the grey market – had already started unofficially bringing in Xiaomi phones and tablets, which were well-received enough by local users to pave the way for the company’s official launch. After the official launch, the company rapidly expanded its offerings across every price segment, which enabled it to become the third-largest smartphone brand in Bangladesh in a shockingly short amount of time. When Xiaomi launched the magnificent Pocophone F1 later that year, it established a record that is yet to be broken by any other smartphone brand in Bangladesh, selling BDT 60 million worth of Pocophone F1 handsets within 30 minutes of the sale beginning. And this year, with the launch of their new flagship, the mighty Redmi K20 Pro, Xiaomi has upped its game considerably, offering a superb price-performance balance that can challenge that of even the loftiest of rival flagships from competing brands.

As an integral part of its brand promise, Xiaomi Bangladesh puts heavy emphasis on providing excellent after-sales service through a total of 11 exclusive authorized Xiaomi service centers, and their numbers are expected to grow over the upcoming years. The after-sales experience is just as important as the product experience, if not more, and Xiaomi has taken it to heart by allowing consumers and potential buyers to reach out across a myriad of online and offline channels to get their queries answered. In order to maximize the effectiveness of its after-sales service, the company has also taken it as a serious mission to build and train its local team to the fullest. Xiaomi Bangladesh’s elite marketing team is constantly working hard to make it a brand of the people, for the people, and their efforts clearly show in every campaign they deploy.

Xiaomi enjoys a unique kind of fan loyalty, one that is equally grounded in hard facts and figures as it is in emotional connections.

With all its successes, it should come as no surprise that Xiaomi enjoys a massive fan base in Bangladesh, with its Facebook page ‘Mi Bangladesh’ alone boasting over 2 million fans. According to SocialBakers, Mi Bangladesh has become the fastest-growing brand page in Bangladesh. The official Mi Community Bangladesh has over 68,000 officially registered users and sees over 100,000 active users every month, priming it to serve as the foundation of an exclusive Mi Fan Club for its most devoted users. The members of this community also include a great many technology enthusiasts who have made it a point to encourage their family members, friends and peers to try out and buy Xiaomi products.

As part of becoming a locally beloved company, Xiaomi has never shirked its responsibilities toward society. During the monsoon seasons, which often bring heavy floods to various parts of Bangladesh, the Xiaomi Bangladesh team has distributed relief to the affected people of those regions.

Xiaomi Bangladesh’s journey, however, has not been without its share of hurdles. While Xiaomi’s official sales have almost doubled, the grey market continues to become more and more of a hindrance, as it opens up avenues for counterfeit and illegal smartphones, and it has become necessary for the company to encourage potential and existing consumers alike to buy genuine products from official distributors to avoid unpleasant incidents. The implementation of the new government-mandated IMEI-based phone registration procedure is expected to remedy this issue significantly.

“Xiaomi has always believed in providing innovation for everyone and will continue to do so,” says Ziauddin Chowdhury, Country General Manager of Xiaomi Bangladesh. “We are looking to build a wider fan base, and focus on giving the best to our users. We hope to bring many more products across categories to Bangladesh market in the next five years, along with delivering on our promises of bringing exceptional after-sales and honest pricing to the market.”

Xiaomi’s success story is not one built on mindless profiteering, but one that shows what a company can become when it listens to its customers and delivers what they want, in an affordable and practical form. Time and again, it has managed to cause seismic changes in the market with products and strategies that have forced its rivals to shift from the status quo in terms of design and pricing, keeping the market from becoming stagnant. As its Bangladeshi arm, Xiaomi Bangladesh has already shown immense promise, and we look forward to seeing what it has in store for us in the future.

Nine apps every social media marketer should know

Social media has never been stronger from a marketing standpoint than it is now. Its maddening ubiquity, combined with its ability to break down walls of silence, has made it one of the most formidable weapons in the arsenal of any modern marketer. However, given that social media is virtually a living, shifting medium spread across a variety of platforms, which continue to be born, to evolve and to fade into obsolescence at a breakneck pace, a marketer relying on social media needs to understand the best ways to tame it and put it to use. Therefore, without further ado, we have put together a collection of nine apps which can be used to make the social media marketing experience a rewarding one for businesses, marketers and consumers alike.

Facebook Pages Manager

Facebook is undoubtedly the world’s most expansive social network right now, and the Facebook Pages Manager app makes short work of many of the challenges page owners face when attempting to reach out to their target audiences. Putting all the essential settings in an easy-to-navigate interface, Pages Manager allows page owners to monitor the growth and response analytics of their pages without being tethered to a computer.

Buffer

Buffer is one of the first multiplatform social media management apps, allowing users to post simultaneously across Facebook, Twitter, Instagram and LinkedIn from one window. Its intuitive design is built to save time, and it even supports direct sharing of links to external web pages from within other apps. But what makes Buffer especially invaluable for marketers is its collection of statistical data on each post’s performance in terms of audience response. While platforms like Facebook do offer analytical data within their own interfaces, Buffer provides it for all the platforms it covers, which allows for precise readjustment of marketing strategies as required.

Evernote

While Evernote is not directly connected to social media marketing, it is nevertheless an incredibly powerful program for planning, allowing its users to take notes and create elaborate multi-level checklists. It is remarkably accessible, with its mobile apps augmented by their desktop counterparts, along with a platform-agnostic web version that runs without issues in any browser. While Google’s Keep offers a similar feature set, Evernote is far more robust and customizable, and it even allows multiple users to collaborate on a single note. The app can even extract text from photos of handwritten notes!

Trello

Trello is a purebred project management solution that relies on the ‘kanban’ schedule management system, allowing each task of a project to be assigned to an individual card, which can be filed under different lists, with assignable levels of priority and completion. The minimalistic design of the app (and its browser counterpart) belies its feature set, which allows deadlines, descriptions and checklists to be attached to each task card, alongside file attachments and the like. Trello also offers an excellent multi-user experience, with each task card assignable to specific users, thus vastly simplifying the division and allocation of roles. While competing services such as Asana and Todoist do exist, Trello has managed to hit a very delicate sweet spot that makes it as satisfying to use as it is useful.

Google Drive
Social media marketing often involves high-quality images and videos, along with project files, which can be huge in size and may also need to be shared with certain personnel. For this purpose, Google Drive is ideal. Competitors like OneDrive and Dropbox are also excellent, but their free plans can be rather stingy – a problem Google Drive doesn’t really suffer from.

Slack
Rather than clogging up personal messengers like WhatsApp or Facebook Messenger, it often makes sense to dedicate a separate messaging platform to work-related discussions in real time. There are numerous such platforms, but Slack is the most notable among them. As with Trello, Slack’s simple, clean interface can be rather deceiving, given how surprisingly feature-packed and versatile it is, right down to built-in file transfer. Slack allows conversations to be held across many channels, each dedicated to a specific subject or aspect of the business, letting specific users discuss certain matters quietly while remaining conveniently isolated from the sections that do not concern them. The Slack app is also very much cross-platform, available on desktop and mobile operating systems alike, and it can even run inside a browser.

Mention
A rather passive app compared to the others mentioned above, Mention is more of a ‘social listener’, continuously keeping watch for any mention of the brand on the web, be it on websites, blogs or social media. It can be set to watch for specific keywords or hashtags and report their presence right away. Mention is excellent for tracking the organic reach of any brand.

MailChimp
MailChimp is considered one of the best mailing list managers currently available, and its wide range of templates and formatting options has made it invaluable for crafting newsletters or ‘mail blasts’. Aside from its desktop version, its mobile app is also quite capable, helping users work on their mailing lists and mail campaigns on the go. MailChimp comes in both free and paid flavours, with the paid version having a great deal more to offer; for smaller businesses, however, the free version should be perfectly sufficient.

Pinterest
The internet is a treasure trove of good and bad ideas, especially on the visual front. An image depicting a solidly executed visual concept can be terrific inspiration to social media marketers, and Pinterest gives users the ability to pin and tag it right away for future reference. Pinterest’s mobile app supports pinning images directly from browsers or other apps via their sharing features.


Google is upping its game with the new Pixel 3A and 3A XL

The Pixel 3 smartphone that Google came up with last year was a solid little piece of work, with an extraordinary single-lens rear-facing camera (single lenses now being pretty much a rarity in the world of flagship smartphones), bolstered by outstanding image-processing algorithms, that managed to best pretty much all of its multi-lensed competitors. Even setting the camera aside, the Pixel 3 was a magnificent little machine; it was quite firmly on the expensive side of things, but it held its own thanks to its excellent premium-grade hardware and bloatware-free operating system – Android at its finest. This year, with the Pixel 3A and its larger but mostly identically specced sibling, the 3A XL, Google is bringing something a lot more affordable to the table, without making a lot of compromises along the way.

The biggest game-changing feature of the Pixel 3A, as has always been the case with the Pixel series, is its camera. For a phone priced very competitively at USD 400 (USD 480 for the XL variant), the 3A absolutely demolishes the notion that one needs to spend upwards of USD 600 to get a phone with a decent camera. The 3A, in fact, boasts the exact same overpowered camera found on the Pixel 3, with an old-style single-lens layout, a 12-megapixel sensor and optical image stabilization to boot. However, the 3A lacks the dedicated Pixel Visual Core image processor present in the Pixel 3, relying on its regular CPU for image processing instead, as most ordinary phones do.

For users, the lack of a dedicated image processor only translates into slightly slower saving speeds for photos (especially if effects such as Portrait Mode or HDR are in use), as well as a longer two-second camera launch time from the lock screen. All the fancy photography features present in the Pixel 3 are also readily available on the 3A, and the camera performs admirably even in low-light conditions, easily matching the beefy cameras of elite devices like the Galaxy S10 or iPhone XS that cost more than twice as much – a feat made even more impressive by the fact that the 3A’s camera has only one lens. The 8-megapixel front camera does not have a wide-angle lens, but it still manages to be quite a powerful performer that does not disappoint much. Face Unlock is not an option on this phone, but it is not a feature many users are likely to miss.

In order to maintain its low price tag, the 3A doesn’t shy away from cutting corners, but thankfully, users who would choose to buy a 3A would not miss said corners very much. For starters, Google’s decision to go with a durable black, white or purple-ish polycarbonate (read: plastic) chassis instead of a more premium metal-and-glass design makes sense, because not only does it help to keep the device’s price down, but it also makes the phone more resilient to smashes and drops.

At less than 8mm thick, the 3A is quite a svelte device, weighing in at less than 150 grams (with the 3A XL weighing about 20 grams more). The fingerprint sensor is placed centrally on the back of the phone, in the most ergonomically friendly location. The 3A also makes the very practical call of retaining the beloved 3.5mm audio jack, making the use of wired earphones a breeze. The speakers on the phone fire downward instead of toward the front, but their sound output is crisp and pleasantly loud.

The Pixel 3A runs a mid-range Qualcomm Snapdragon 670 CPU. It isn’t the fastest CPU on the market, but it isn’t the most sluggish one either. Apps sometimes take a second or two longer to fire up, but once an app is loaded, it runs smoothly with no noticeable lags. With 4 gigabytes of RAM, the 3A is quite a capable multitasker as well. It doesn’t have the blistering pace of phones with Snapdragon 845 or 855 CPUs, but it nevertheless does its job quite well without feeling cumbersome.

The screen of the Pixel 3A is pretty much standard fare. Instead of going with an edge-to-edge display, the 3A’s face retains a prominent forehead and chin that is especially noticeable on the larger XL variant, making it look decidedly last-gen. However, the display itself is quite formidable, with a sheet of Dragontrail Glass shielding a 1080p 5.6” 19:9 (6” 18:9 in the case of the XL) OLED panel that displays richly saturated colors and deep blacks. The OLED panel also makes it possible for only a few pixels of the display to remain active at all times, showing the time and notifications while sipping very little battery power.

The 3A’s operating system, as expected on Pixel devices, is Android at its purest, bristling with useful Google services. The 3A is also eligible for officially receiving the latest Android updates for the next three years from the time of its release. Google does not offer free original-quality photo backups to 3A users (as it does for users of the Pixel 3), but the regular high-quality backup feature is still available.

The 3A lacks support for wireless charging, and it does not officially offer water resistance of any level. However, at this price point, wireless charging is not likely a priority for most users, and most regular smartphones nowadays can shrug off all but the most unfortunate of water splashes. One particularly disappointing omission, though, is the 3A’s lack of support for expandable microSD storage. The 3A is only available with 64 GB of internal storage, and not having the option to expand it can be a dealbreaker for some.

The 3A has a 3,000 mAh battery, and the 3A XL’s battery goes up to 3,700 mAh. However, thanks to Google’s excellent optimization of Android Pie, both phones manage to hold up fine against a day of heavy use. A bigger battery would have been welcome still, but it’s not anything to complain about.

The Google Pixel 3A manages to marry a fantastic camera to a ridiculously affordable price tag, and even with its compromises, makes a very compelling case for the buyer on a budget. If the lack of expandable storage and some other more esoteric features is not a problem, it is very possibly one of the best available options in this price range, and it greatly pleases this reviewer to recommend it. This is a phone that showed up fashionably late to the party, but not without turning more than just a few heads.


Why USB4 is going to revolutionize peripheral connectivity as we know it

When the Universal Serial Bus, or USB as it came to be popularly known, first showed up in the mid-’90s as a bold attempt to establish a common standard of connectivity for hardware peripherals, it didn’t catch on immediately, but once the first wave of USB-compatible hardware showed up, there was no looking back. And, sure enough, it seems impossible now to imagine a world without USB. USB has undergone multiple iterative changes since its inception, leading to rapidly rising bandwidth across each generation, and, with the introduction of USB-C, has taken on a whole new reversible form factor that is not only designed to have superior usability, but to also eventually phase out the legacy USB connectors of the past (known as USB-A). And now, with the announcement of USB4 by the USB Promoter Group, it seems that USB will finally fulfill its goal of being the connectivity protocol that would bridge everything across the great peripheral divide.

Granted, this is not the first time that we have heard the hymns of a new standard that promises to unify everything. Even the earlier generations of USB are in a fair state of disarray, with confusing naming and overlapping feature sets. However, USB4 is likely to put an end to that, not only by bringing forward a host of new and useful benefits, but also by being backward-compatible with all earlier generations of USB (going back to the ancient 2.0), as well as with powerful protocols such as Intel’s Thunderbolt, previously considered one of USB’s fiercest rivals in the technology space.

For starters, USB4 will make use of the compact reversible connector introduced to the world as USB-C in 2014. Given their ubiquity, USB-A ports are yet to be phased out from many devices, not to mention that countless existing pieces of hardware still depend on them. USB-C, however, has been growing steadily in popularity over the last couple of years, and it has already found its way into many phones and computers, desktop and laptop systems alike. The USB-C connector is a dream to use: its reversible nature eliminates one of the biggest gripes associated with USB-A, which was notorious for refusing to plug in on the first attempt and requiring the user to flip the connector over and try again until it finally slotted in. Despite their smaller size, USB-C connectors are remarkably robust and resilient, and they can be used with converter dongles to easily overcome compatibility issues with older devices until those are phased out altogether. USB-C connectors are also essential for implementing newer technology standards, such as the amazing USB Power Delivery, which allows up to 100 watts of power to be delivered for charging connected devices at a rapid pace.

USB4 is going to have blistering transfer speeds that begin at 10Gbps and go up to 40Gbps, the same speed as the Thunderbolt 3 connectivity standard. Given that both USB4 and Thunderbolt 3 make use of USB-C connectors, this also means that USB4 implementations can be compatible with Thunderbolt 3, provided the particular implementation supports it (which it is most likely to, at least on computers, if not on smaller devices). Intel has officially endorsed this compatibility by contributing the Thunderbolt 3 specification to the USB Promoter Group. This would be further augmented by smart allocation of system resources, allowing the bandwidth of USB4 to be effectively divided between the processes that require it, without causing any further inefficiency or slowdowns. If a USB4 device is connected to an older USB device from a previous generation, the bandwidth drops to the maximum bandwidth of the older (and slower) device while retaining usable backward compatibility. This makes sense, given that any chain can only be as fast as its slowest component. On the bright side, it also means that all old USB cables would operate at their maximum possible speeds when connected to a USB4 interface.
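The "slowest component in the chain" rule can be sketched in a few lines. This is purely illustrative arithmetic, not part of any USB specification; the dictionary below just lists the nominal maximum speeds of each generation as published:

```python
# Nominal maximum link speeds in Gbps, per the published USB specifications.
LINK_SPEEDS_GBPS = {
    "USB 2.0": 0.48,
    "USB 3.2 Gen 1": 5.0,
    "USB 3.2 Gen 2": 10.0,
    "USB4": 40.0,
}

def effective_speed(chain):
    """Throughput ceiling for a chain of links, e.g. host -> cable -> device.

    The whole chain is capped by its slowest member, which is why plugging a
    legacy device into a USB4 port drops the link to the legacy speed.
    """
    return min(LINK_SPEEDS_GBPS[link] for link in chain)

print(effective_speed(["USB4", "USB4"]))      # 40.0 – full USB4 speed
print(effective_speed(["USB4", "USB 2.0"]))   # 0.48 – the old device caps the chain
```

The same logic explains why a fast cable never slows anything down: `min()` is only ever dragged down by the weakest link, never by the strongest.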

The high bandwidth of USB4 makes it ideally suited to many purposes, such as connecting aggressively bandwidth-hungry devices like external GPUs to a computer without sacrificing speed at any point – something previously achievable only with Thunderbolt 3. USB4, however, would be available on Intel and non-Intel hardware alike, rendering the dependence on Thunderbolt 3 for such tasks moot. USB4 would also allow for the direct transmission of HDMI/DisplayPort video data, making it a breeze to connect compatible external displays and allowing for greater productivity and a superior user experience.

The biggest downer about USB4, however, is that while it is due to be released in specification form in mid-2019, it isn’t likely to show up in actual hardware before 2020 at the very earliest. Given that most new products have a development cycle of at least a year, the wait may be even longer. Still, given the promised benefits, it is something to look forward to.
Another challenge for manufacturers is USB4’s higher production cost compared to currently established USB standards, as it requires more expensive hardware components to implement. That, too, is a hurdle that should eventually be overcome as economies of scale fall into place.
Even in an era where wireless connectivity is catching on in the mainstream, the dependability and capacity of a physical wired connection remain unparalleled. Everything considered, USB4, when it does show up to the party, would be a critical game-changer – very likely the protocol that finally unifies the fragmented dimensions of peripheral connectivity. It just remains to be seen when that day finally comes.


What to expect from the upcoming new version of Android

Naming each new version of Android after a dessert has long been a tradition/running gag at Google’s end, and it always starts with the reveal of a single letter – the first letter of the dessert’s name – and with each version, Google goes one letter down the alphabet river. Android is currently on version 9, officially codenamed Pie. Logically, the name of the next version should begin with the letter Q, but it is yet to be revealed what it is going to be named after, because not too many desserts have names starting with Q.

Despite the enigmatic nature of its name, Google has already rolled out early in-development versions of Android Q (as version 10 is being called for now) to the masses, albeit only the people who own Google Pixel phones, even the oldest ones. The software is still far from being perfected, but first impressions have been enough to reveal that it has plenty of new features to get Android diehards excited about the next sweet treat from Google.

For starters, Android Q is making some much-needed changes to the highly useful yet frustrating Share menu that has existed in Android for a long time. In order to prevent accidental sharing of wrongly selected content, the Share menu now displays a small preview of what is being shared, and app-specific sharing shortcuts can also be preloaded early on to save time.

Privacy is a major concern in the world of today, and app privacy controls are better than ever in Android Q. Access to certain features can be granted on a temporary basis, easily customized according to the user’s preference. For example, an app can be set to have access to the device’s location only while it is in use. Furthermore, access to media content is being stratified into photo, video and audio categories, so that apps don’t have more access than they need. Apps can also no longer hijack the device’s focus by jumping to the foreground automatically; instead, they must alert the user with a notification in such cases.

Thanks to an advanced new theme engine, Android Q is going to be the first version of Android to have visual theming options natively, allowing users to define various options, such as icon shapes, fonts and accent colours. This is joined by a heavily improved system-wide dark mode, which brings the long-coveted feature of darkening the user interface to not only suit the tastes of certain users, but also to have a substantial impact on the battery life of devices that have OLED screens. OLED displays consume little to no power when displaying dark or black elements, and are going to benefit considerably from a dark mode. However, a toggle switch for light and dark modes has not yet been made available on the current work-in-progress version.

System settings have become easier than ever to access in Android Q, with relevant settings displayed in a floating panel that can be quickly tapped to grant or deny consent without having to pause one’s activities and dig into system settings. Android Q also brings advanced controls to notifications: swiping a notification to the left brings up its display options, including the option to be alerted only for the newest notification when several are queued up in a row.

While devices with foldable screens are the newest entrants in the mobile device scene, it is safe to assume that despite their obnoxious pricing, they are likely to catch on before long. The foldable-screen devices that have emerged so far run heavily modified versions of Android 9 Pie, but Google has made it a point to include official support for foldable screens in Android Q, which means manufacturers will find it far easier to develop such devices without wasting time customizing the software to accommodate the unusual display setup.

Speaking of screens, taking what seems to be a cue straight from the highly enthusiastic worldwide Android modding community, Android Q is bringing the long-awaited option to natively record the device’s screen without requiring any third-party software. It works like taking a screenshot, but it instead keeps recording whatever happens on screen in the form of a video until it is stopped. As an added aesthetic bonus, screenshots taken on devices with displays that have rounded corners now also include dark cutouts for rounded corners and display notches, accurately mirroring the shape of the display even in screenshots.

Android Q is also likely to bring a proper desktop mode to Android devices, presumably so that they can be used with external displays (or, in case of foldable-screen devices, their collapsible displays) and peripherals for productivity-centric tasks. While this feature is yet to be explored fully, it would be a boon for the users, and it can actively remove the need for full-fledged desktop or laptop computers in typical usage scenarios.

One of the best new features that have made their debut as part of Android Q is the ability to share Wi-Fi network passwords with other users in the form of a scannable QR code. While its utility may appear to be limited, it is nonetheless quick and convenient, because it saves a lot of yelling back and forth between users.
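Under the hood, these QR codes encode the network details in a widely used plain-text format popularized by the ZXing barcode project, which any QR library can then render as an image. A minimal sketch of building that payload (the network name and password below are made up):

```python
def wifi_qr_payload(ssid, password, auth="WPA"):
    """Build the standard WIFI: payload string that Wi-Fi QR codes encode.

    Per the ZXing convention, the characters \\ ; , : " inside the SSID or
    password must be escaped with a backslash.
    """
    def esc(s):
        # Escape the backslash first so we don't double-escape later inserts.
        for ch in ("\\", ";", ",", ":", '"'):
            s = s.replace(ch, "\\" + ch)
        return s

    return f"WIFI:T:{auth};S:{esc(ssid)};P:{esc(password)};;"

# Hypothetical network; note the semicolon in the password gets escaped.
print(wifi_qr_payload("HomeNet", "hunter;2"))
# WIFI:T:WPA;S:HomeNet;P:hunter\;2;;
```

Feeding the resulting string to a QR generator yields exactly the kind of code a phone camera can scan to join the network.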

Along with these features, Android Q is bringing many minor feature updates and bugfixes to the table as well. While it cannot be said with certainty that all of these new features will make it to the final cut of Android Q, it can be safely assumed that it is going to be a substantial update, even if it is not the most groundbreaking one, and it is definitely something to look forward to for Android fans. It may take some time yet to roll out to devices, but it should nonetheless be well worth the wait.


Artificial intelligence is the future, but is it a good one?

Ever since the very idea of artificial intelligence (AI) was coined, opinions on it have been divisive, to say the least. Science fiction authors and futurists alike have had, and continue to have, field days over the matter, wondering in a veritable multitude of ways how machines capable of ‘thought’ might change the world and bring about a future very different from what we are used to – be it one of glory and prosperity, or death and destruction.

Scientist and tech visionary Andrew Ng once quipped, “Artificial intelligence is the new electricity.” And indeed, just like how the advent of electricity transformed society over the course of the last century, artificial intelligence has already started to shape our view of reality in ways that most of us don’t even realize yet. While a truly ‘conscious’ and/or ‘sentient’ AI is yet to emerge, AI programs that are capable of ‘learning’ and improving upon their own processes have already been developed, and they have been put to use in a myriad of technical fields. And this time, it seems that the paradigm shift caused by the introduction of AIs to the world as we know it may have seismic repercussions if it is not carefully regulated.

Artificial intelligences have long been favourite plot points of science fiction movies and literature, and we are all familiar with fictional big-name Hollywood AIs such as Skynet and HAL 9000, many of which are shown to rapidly reveal a malevolent personality that threatens the world at large, or at least the people around them. While present-day AI systems are yet to exhibit a personality beyond crude emulations of human ones, without extensive failsafe mechanisms in place the danger posed by the militaristic use of artificial intelligence is a very real one, for reasons as simple as data bias, a lack of ‘empathy’, or simply because a machine would not process every aspect of a threat the way a human would, and might respond with countermeasures that endanger humanity as a whole. Despite these concerns, many countries have been pouring billions of taxpayer dollars into AI-based weapons technology, with the USA and China at the forefront. Elon Musk, one of the most prolific technology-focused entrepreneurs of our time, has gone as far as to say that AIs pose an ‘existential risk to humanity’ that simply cannot be ignored.

Even worse is the fact that AIs don’t necessarily need access to weapons systems to wreak havoc. Recently, OpenAI, an AI research firm of which Elon Musk was until recently a member, unveiled an AI capable of spinning volumes of extremely convincing-sounding stories and fake news from a few pieces of (mis)information. In a world where information warfare is as critical as (if not more critical than) warfare with weapons, the threat posed by such technology cannot be overstated. Responsibly, OpenAI withheld the full version of the program from falling into anyone’s hands, but it is safe to say that if they managed to develop it today, it is only a matter of time before another entity with less benevolent intentions builds a system that does the same job, and puts it to use. Many of the biggest names in technology are already dabbling in AI, and it is safe to assume that if any of them develop such a technology, they would not hesitate to sell it to the highest bidder.

Speaking of warfare on the informational front, ungoverned and irresponsible AI research can also lead to the development of new computer viruses capable of masking their presence by intelligently avoiding the behavioral patterns that antivirus programs are designed to detect. Such programs would even be able to alter their own code to become more effective, allowing them to bypass security measures or simply cause even more damage. More comprehensive cyber-attack AI platforms would be able to study and learn the behaviors of the users of their target machines, and deduce how and when to carry out attacks of devastating potential. Combating such threats would require the development of new AI-based antivirus and cyber defense solutions, which would not only demand a massive outlay of resources, but would also require specialized and expensive hardware to run effectively.

However, the scariest aspect of the AI revolution in an everyday context is that it is rapidly growing more complex, and its rapid adoption by companies means that in another decade or so we may be having real-time conversations with AI machines without even realizing it half the time. Google has already demonstrated a working prototype of such a system. Such AI programs could be godsends to scammers and manipulators, who would be able to use them to their full advantage. Considering that this is an age where virtually everybody is busy uploading the most intimate details of their lives into complex stratified databases in the form of social media posts, using that information with malicious intent to fool people into doing the bidding of the highest bidder is merely the next logical step. Machine learning technology, as it stands today, has already produced prototypes of real-looking, real-sounding news anchors that are in fact digital simulacra, customizable to suit the specific needs of clients.

While there is no concrete indication that artificial intelligences would spell certain doom for humanity, the threats they pose are very real indeed, and without proper governing and safeguarding measures, there is ample room for AI systems to be misused. The fear of what we do not fully know or understand yet is only natural, and in this case, at least until public awareness of the potential dangers of AI becomes more mainstream, the hesitation born of that fear may prove a healthy check on their ultimate impact.


Amazon’s new Kindle Paperwhite is a must-have for serious readers

I was probably one of the first people in all my circles to move away from real books toward electronic books, or ebooks, as they have been known for a while. It was quite a daunting task at the time, given the myriad of challenges and pitfalls awaiting me. The phones on which I used to get my reading done had diminutive displays with giant pixels, capable of showing only a few lines of text at a time. The software I used for reading ranged from bug-ridden to downright unusable. And, worst of all, ebooks were not readily available anywhere aside from the seediest of piracy havens. It was painful, but the convenience of being able to read a book without having to carry around a paper brick was more than enough to keep me interested.

Fast-forward a dozen years, and ebooks are now all but ubiquitous. Services like Amazon’s Kindle eBooks and Apple’s iBooks have not only managed to turn ebooks into a mainstream and legitimately profitable enterprise but have also actually managed to put a sizable dent in the paper book publishing industry. Additionally, reading off screens has never been more comfortable, with even an average Chinese smartphone being endowed with a giant screen that was unimaginable in the times when I had first taken up reading ebooks. Tablets are also great for reading on the go, offering even more visual real estate. However, nothing feels more like reading a book than a dedicated ebook reader with an E Ink screen that emulates the glare-free appearance of paper to near-perfection. And in that department, Amazon’s 2018 iteration of its popular Kindle Paperwhite line of ebook readers is proving to be a smashing success in every regard.

The Kindle Paperwhite 2018’s biggest new feature is its IPX8-rated waterproof design. This is a boon for voracious readers who don’t want to pause their reading just because they are chilling in a pool or a bathtub or soaking up some sun on a beach. The waterproofing not only ensures that the device can take its share of dunks and splashes (according to official announcements, it can survive being immersed for an hour under two meters of freshwater), but it also shows that the device is rugged enough to brave a bit of rough use.

The Paperwhite has a remarkably small form factor and weighs less than most modern smartphones at only 182 grams. At barely 8 millimeters thick, it can be slid into most bags or backpacks with ease and can be held up with minimal effort without straining the wrists. In fact, for people who enjoy reading while lying down, this ensures that accidentally dropping the Kindle on one’s face would not result in a smarting nose. However, given how well the Paperwhite feels in the hand and how easy it is to grip, it is not very easy to simply drop it by accident.

The six-inch touchscreen display of the Paperwhite has a pixel density of 300 PPI (pixels per inch) and is capable of displaying up to 16 shades of grey, which means that any text on the screen is wonderfully crisp, with individual pixels indistinguishable unless the device is held very close to the eyes. As is the case with E Ink-based devices, the screen is devoid of glare, and it can be enjoyed to the fullest under bright sunlight or indoor lighting without having to worry about annoying reflections. The device is also perfectly suitable for reading in the dark, its screen evenly front-lit by five white LEDs.
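Incidentally, that 300 PPI figure is easy to verify: pixel density is simply the diagonal resolution of the screen divided by its diagonal size in inches. A quick back-of-the-envelope sketch, assuming the commonly listed 1448×1072 panel resolution for this Paperwhite:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Diagonal pixel count (Pythagoras) divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# 1448x1072 over a 6" diagonal lands almost exactly on the advertised 300 PPI.
print(round(pixels_per_inch(1448, 1072, 6.0)))  # 300
```

The same formula works for any screen in this roundup; it is just arithmetic, not a property of the device.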

For some reason, however, the Paperwhite still makes use of a microUSB port for computer connectivity and charging, and it is a bit of a mystery as to why Amazon would go with microUSB as a connectivity option instead of adopting the new and rapidly spreading USB-C reversible connector. However, microUSB is still quite some ways away from becoming extinct, and finding a proper cable for the device should not be a huge problem.

On the wireless side of things, aside from the usual Wi-Fi connectivity (and cellular connectivity on certain models), the Kindle Paperwhite 2018 also brings Bluetooth to the table, allowing users to enjoy Amazon’s Audible audiobooks over Bluetooth earphones. The Paperwhite comes in two flavors when it comes to storage options, 8 GB or 32 GB. For users who prefer Audible audiobooks, the 32 GB variant is definitely a better buy, but for most regular readers, the 8 GB version (starting from USD 130) contains more than enough storage for hundreds of books.

Amazon sells a wide range of accessories for the Kindle Paperwhite, among which screen protectors and flip covers would go a long way to enhance the durability and preserve the physical integrity of the device.

Thanks to the superb efficiency of E Ink technology, the Paperwhite 2018 has remarkable battery life, lasting for several weeks on a single charge. E Ink displays drain the battery only while rendering a new page of text (e.g. when a book is loaded, a page is turned, or a menu is opened), but no charge is expended while a static page is being displayed. This has been a defining characteristic of the Kindle line of ebook readers, and the Paperwhite is no exception. The only downside to this is that it’s often far too easy to forget to charge the device on time, given that it is required so infrequently.

The software on the Kindle is simple yet elegant. Fonts, text size, margins, character spacing etc. can be adjusted quickly and easily to suit the preferences of individual users, and combinations of settings can also be saved as preset themes, giving the reader total control over the look and feel they prefer. Pages can be ‘flipped’ or ‘turned’ by simply swiping across the touchscreen. The device’s simplicity and no-nonsense approach toward reading and reading only is further enhanced by an absence of any superfluous input options aside from a wake-up button on the bottom.

While the Paperwhite 2018 isn’t the first waterproof Kindle – that honour goes to last year’s Kindle Oasis – it is the first truly affordable one of its kind, and it offers a distraction-free reading experience like no other. For people who cherish their reading time and want to make every second of it worthwhile, there is no better option for the price.