Xiaomi Folding Phone...

Chinese smartphone makers Xiaomi and Oppo appear set to join Samsung and Huawei in the race to bring foldable smartphones to market. Huawei, reported to be working on the world's first foldable smartphone, may kick off the trend with its own release. A foldable smartphone from Samsung has been part of the rumour mill for over two years, with the company in recent months confirming that it would bring the smartphone to market this year, adding that it would not be a gimmick. Reports have indicated that the South Korean giant is expected to launch the handset in 2019, though recent reports suggest it won't carry the Galaxy X moniker; Samsung may instead launch the foldable handset under a Galaxy F brand name. Whatever the case may be, Xiaomi and Oppo are also said to be working on their own foldable handsets for 2019.

As per an ETNews report, Xiaomi and Oppo are working with suppliers to source components for foldable phones, most likely hinges and foldable displays. The report adds that Xiaomi and Oppo will obtain foldable displays from Chinese suppliers, though LG Display has also been touted as a possible supplier. The report claims that Xiaomi may be working on a fold-out design, as opposed to the in-folding designs Huawei and Samsung are rumoured to be pursuing. However, there is no clarity on the design Oppo could be working on.

As of now, there is no information on a launch timeframe for Xiaomi's and Oppo's foldable phones, and it is unclear how the two Chinese companies will price them. Xiaomi and Oppo are known for launching phones with cheaper price tags than Samsung and Huawei, so it is possible that Xiaomi's foldable smartphone arriving in 2019 may cost less than its rivals'. Currently, there is no information regarding the design either.

As it stands, there are no true foldable handsets on the market. To be fair, ZTE did launch the Axon M foldable smartphone late last year, but that featured two separate displays with a hinge in between. Even so, foldable smartphones promise to shape 2019, with at least four major releases in the works.

Uber Flying Car...


Uber’s “flying car” project Elevate came whizzing back into view today with a number of key announcements about where it will first appear, who will be working on it, and how this futuristic service will look when it ultimately takes off.
In a speech at the Web Summit in Lisbon today, Uber’s head of product Jeff Holden announced that the company is adding a third city, Los Angeles, to its list of places where it hopes to pilot its aerial taxi service by 2020. LA joins Dallas-Fort Worth and Dubai as cities announced to be working with Uber on the program.
Holden also said that Uber has signed a Space Act Agreement with NASA to create a brand-new air traffic control system to manage these low-flying, possibly autonomous aircraft. And to round it all out, Uber released a glossily produced video to demonstrate what using its aerial taxi service would look like from the perspective of a working mom who just wants to get home to her kids.
As you can see, it’s all very utopian. A passenger books the flight through her Uber app, and then ascends to a “skyport” on the roof of a nearby building. She badges through a turnstile using her smartphone — security is non-existent in this futuristic vision — and is briefly weighed to make sure she’s not too portly for Uber’s weight-conscious flying taxis.
Smiling agents wearing headsets, goggles, and Uber-branded vests lead her and several other passengers across the roof to their awaiting aircraft, which appears to be a plane-helicopter hybrid with fixed wings and tilt prop-rotors. Despite the presence of these whirling blades, no-one’s hair moves at all. During the flight, she looks out of the window with pity at all the poor souls stuck in traffic below, as she is whisked through the clouds to her gorgeous, perfect family waiting at home. The closing tagline: “Closer than you think.”

Pixel Watch...


  1. This will be the first flagship watch entirely designed and manufactured by Google. The company is yet to launch a smartwatch of its own. Earlier, it chose to partner with LG on its Watch Style and Watch Sport.
  2. The company is working not on one but on three Wear OS devices, codenamed Ling, Triton and Sardine. Rather than three separate products, it's more likely these are variants of the same watch that differ in size and features.
  3. The watches will have GPS, LTE and VoLTE support.
  4. Some of the variants will have a heart rate sensor and the ability to track stress, along with other health features. Expect more fitness coaching and guidance than we’re used to seeing. We’ve recently learned that Google is working on a wearable health and fitness AI assistant to help whip you into shape. It will dissect your daily habits and suggest ways to lead a healthier lifestyle. This will tie in nicely with the Pixel Watch.
  5. All variants will sport Qualcomm’s new chip, the Snapdragon Wear 3100, which uses an ARM Cortex-A7 architecture and an Adreno 304 GPU. This is the first time since 2016 that Qualcomm has upgraded its wearable processor. The company has already confirmed the chip will be out this fall, and a press conference is scheduled for September 10th, when we’ll learn more.
  6. Some sources suggest the watches could be used to replace passwords, acting as a physical security key. The average person is said to have 19 passwords, and let’s face it – lots of these are unsafe, weak, repetitive choices. It’s only a matter of time before passwords are replaced by key fobs, fingerprints, voice and facial recognition. No one will miss them!
  7. Expect an always-on Google Assistant as a key feature of the new timepiece.
  8. Looks are just as important as functionality. Unfortunately, many Wear OS watches are bulky beasts, designed to accommodate large batteries. A stylish design for the Google Pixel Watch should be on the cards.
  9. Wear OS watches are not well known for stonking battery life. However, the new Qualcomm chip is said to bring a significant boost in battery performance, and by extension the Pixel Watch should benefit. Battery life will of course depend on other specs. The device is likely to have added power modes, too.

Apple Car...

Starting in 2014, Apple began working on "Project Titan," with upwards of 1,000 employees developing an electric vehicle at a secret location near its Cupertino headquarters. Internal strife, leadership issues, and other problems impacted the car project, with rumors suggesting that in 2016 Apple shelved plans for a full car for the foreseeable future.
Apple reportedly laid off hundreds of employees who were working on the project, and under the leadership of Bob Mansfield, Apple is said to have transitioned to building an autonomous driving system rather than a full car, which could potentially be used in the cars of various partner companies.
Though multiple rumors have suggested Apple has shifted its focus to autonomous driving software rather than a full-on car, the August 2018 rehiring of Tesla engineer Doug Field has led to speculation that Apple may again be exploring a car option.
Reliable Apple analyst Ming-Chi Kuo also believes that Apple is still working on an Apple Car that will launch between 2023 and 2025. Kuo believes the car will be Apple's "next star product" with Apple able to offer "better integration of hardware, software and services" than potential competitors in the automotive market.
In June of 2017, Apple CEO Tim Cook spoke publicly about Apple's work on autonomous driving software, confirming the company's work in a rare candid moment. Apple doesn't often share details on what it's working on, but when it comes to the car software, it's harder to keep quiet because of regulations.
"We're focusing on autonomous systems. It's a core technology that we view as very important. We sort of see it as the mother of all AI projects. It's probably one of the most difficult AI projects actually to work on." --Apple CEO Tim Cook on Apple's plans in the car space.
In early 2017, Apple was granted a permit from the California DMV to test self-driving vehicles on public roads, and it is testing its self-driving car software platform in several 2015 Lexus RX450h SUVs leased from Hertz. The SUVs have been spotted out on the road with a host of sensors and cameras since April.
Apple has several of the Lexus SUVs outfitted with a range of different sensors running its self-driving software. New LIDAR equipment was spotted in August of 2017, and Apple has been significantly ramping up its fleet in 2018. As of May 2018, Apple has 62 vehicles out on the road using its autonomous driving software.
Robotic Process Automation...

Robotic process automation (RPA) is an emerging form of business process automation technology based on the notion of software robots or artificial intelligence (AI) workers.
In traditional workflow automation tools, a software developer produces a list of actions to automate a task and interfaces to the back-end system using internal application programming interfaces (APIs) or a dedicated scripting language. In contrast, RPA systems develop the action list by watching the user perform the task in the application's graphical user interface (GUI), and then perform the automation by repeating those actions directly in the GUI. This can lower the barrier to automation in products that might not otherwise expose APIs for this purpose.
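To make the contrast concrete, here is a minimal, hypothetical Python sketch of the record-and-replay idea: a recorder "watches" the user's steps and stores them as an action list, and a replayer then re-drives the interface by repeating those steps. The `Action`, `Recorder`, and `replay` names are illustrative only; a real RPA platform would drive the actual GUI rather than the stand-in dictionary used here.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """One recorded UI step, e.g. a click or some typed text."""
    kind: str          # "click" or "type"
    target: str        # the on-screen control involved
    value: str = ""    # text entered, if any

@dataclass
class Recorder:
    """Builds the action list by 'watching' the user work."""
    actions: list = field(default_factory=list)

    def observe(self, kind, target, value=""):
        self.actions.append(Action(kind, target, value))

def replay(actions, ui):
    """Repeat the recorded steps against `ui`.

    `ui` stands in for a GUI driver; here it is just a dict that
    records what was typed where and which controls were clicked.
    """
    for a in actions:
        if a.kind == "type":
            ui[a.target] = a.value
        elif a.kind == "click":
            ui.setdefault("clicks", []).append(a.target)
    return ui

# "Watch" a user fill in one field and press Save...
rec = Recorder()
rec.observe("click", "Amount field")
rec.observe("type", "Amount field", "1250.00")
rec.observe("click", "Save button")

# ...then let the bot repeat the same steps.
print(replay(rec.actions, {}))
```

The key point the sketch captures is that the automation is defined by demonstration, not by programming against an API.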
RPA tools have strong technical similarities to graphical user interface testing tools. These tools also automate interactions with the GUI, and often do so by repeating a set of demonstration actions performed by a user. RPA tools differ from such systems by including features that allow data to be handled in and between multiple applications: for instance, receiving an email containing an invoice, extracting the data, and then typing it into a bookkeeping system.
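The invoice scenario can be sketched in the same spirit: a few lines of Python that pull the relevant fields out of an email body, ready to be re-keyed into another system. The email text, field labels, and `extract_invoice` function are made up for illustration.

```python
import re

EMAIL_BODY = """\
From: billing@supplier.example
Subject: Your invoice

Invoice number: INV-2041
Amount due: 1,250.00 USD
"""

def extract_invoice(text):
    """Pull out the fields a bot would re-key into a bookkeeping system."""
    number = re.search(r"Invoice number:\s*(\S+)", text).group(1)
    amount = re.search(r"Amount due:\s*([\d,.]+)", text).group(1)
    return {"number": number, "amount": float(amount.replace(",", ""))}

print(extract_invoice(EMAIL_BODY))  # {'number': 'INV-2041', 'amount': 1250.0}
```

A production bot would add the "typing" half of the workflow, driving the bookkeeping application's GUI with the extracted values.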

Historical Evolution

As a form of automation, the same concept has been around for a long time in the form of screen scraping, but RPA is considered a significant technological evolution of that technique: new software platforms are emerging which are sufficiently mature, resilient, scalable and reliable to make the approach viable for use in large enterprises, which would otherwise be reluctant due to perceived risks to quality and reputation.
To illustrate how far the technology has developed since its early form in screen scraping, consider an example cited in one academic study. Users of one platform at Xchanging - a UK-based company which provides business processing, technology and procurement services globally - anthropomorphized their robot into a co-worker named "Poppy" and even invited "her" to the Christmas party. Such an illustration demonstrates the level of intuition, engagement and ease of use of modern RPA technology platforms, which leads their users (or "trainers") to relate to them as beings rather than abstract software services. The "code free" nature of RPA (described below) is just one of a number of features that differentiate RPA from screen scraping.

Impact on Society

Academic studies project that RPA, among other technological trends, will drive a new wave of productivity and efficiency gains in the global labour market. Although the gains are not attributable to RPA alone, Oxford University conjectures that up to 35% of all jobs may be automated by 2035.
In a TEDx talk hosted by UCL in London, entrepreneur David Moss explains that digital labour in the form of RPA is not only likely to revolutionise the cost model of the services industry by driving the price of products and services down, but that it is likely to drive up service levels, quality of outcomes and create increased opportunity for the personalisation of services.
Meanwhile, Professor Willcocks, author of the LSE paper cited above, speaks of increased job satisfaction and intellectual stimulation, characterising the technology as having the ability to "take the robot out of the human", a reference to the notion that robots will take over the mundane and repetitive portions of people's daily workload, leaving them to be redeployed into more interpersonal roles or to concentrate on the remaining, more meaningful, portions of their day.
DevOps...

DevOps is the blending of tasks performed by a company's application development and systems operations teams. The term DevOps is used in several ways. In its broadest meaning, DevOps is an operational philosophy that promotes better communication between development and operations as more elements of operations become programmable. In its narrowest interpretation, DevOps describes the part of an organization’s information technology (IT) team that creates and maintains infrastructure. The term may also be used to describe a culture that looks strategically at the entire software delivery chain, overseeing shared services and promoting the use of new development tools and best practices.
Traditionally in the enterprise, the development team tested new code in an isolated development environment for quality assurance (QA) and -- if requirements were met -- released the code to operations for use. The operations team deployed the program and maintained it from that point on. One of the problems with this approach, known as waterfall development, is that there was usually a long time between software releases, and because the two teams worked separately, the development team was not always aware of operational roadblocks that might prevent the program from working as anticipated.
The DevOps approach seeks to meld application development and deployment into a more streamlined process that aligns development, quality assurance (QA) and operations team efforts. This approach also shifts some of the operations team’s responsibilities back to the development team in order to facilitate continuous development, continuous integration, continuous delivery and continuous monitoring processes. The necessity of tearing down the silos between development and operations has been accelerated by the need to release code faster and more often, helping the organization respond in a more agile manner to changing business requirements. Other drivers for breaking down the silos include the increasing use of cloud computing and advances in software-defined infrastructures, microservices, containers and automation.
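The gating idea behind continuous integration and delivery can be illustrated with a small, hypothetical Python sketch: pipeline stages run in a fixed order, and the first failure stops the run, so broken code never reaches the deploy stage. The stage commands here are placeholder `echo` calls, not a real build system.

```python
import subprocess

# Each stage is (name, shell command); a real pipeline would invoke
# compilers, test runners, and deployment scripts here.
STAGES = [
    ("build",  "echo compiling"),
    ("test",   "echo running unit tests"),
    ("deploy", "echo shipping"),
]

def run_pipeline(stages):
    """Run stages in order; stop at the first failure."""
    completed = []
    for name, cmd in stages:
        if subprocess.run(cmd, shell=True).returncode != 0:
            print(f"stage '{name}' failed; aborting pipeline")
            break
        completed.append(name)
    return completed

print(run_pipeline(STAGES))  # all three stage names when every stage succeeds
```

Real CI/CD systems add much more (triggers on each commit, parallelism, artifact storage, rollbacks), but the stop-on-failure gate is the core mechanism that keeps bad changes out of production.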

AI-powered camera...

Artificial intelligence (AI) is everywhere, and if you haven't yet got an AI-powered smartphone, you probably soon will. Is it all just marketing hype, or is AI in a smartphone – and particularly in its camera – something we should all aspire to have? With the term AI increasingly being used not only in smartphones but in all kinds of cameras, it pays to know what AI is actually doing for your photos.

What is AI?

AI is a branch of computer science that examines whether we can teach a computer to think or, at least, learn. It's generally split into subsets of technology that try to emulate what humans do, such as speech recognition, voice-to-text dictation, image recognition and face scanning, computer vision, and machine learning. What’s it got to do with cameras? Computational photography and time-saving photo editing, that’s what. And voice activation.

Voice-activated cameras

The ability of a computer to understand human speech is a form of AI, and it's been creeping into cameras for the last few years.
Smartphones have offered Google Now and Siri for a few years, while Alexa is entering homes via Amazon Echo speakers. Action cameras have jumped on that bandwagon in recent years, with GoPro cameras and even dash cams able to respond when you utter simple phrases such as 'start video', 'take photo' and so on.

AI software

AI is about new kinds of software, initially making up for smartphones’ lack of zoom lenses. “Software is becoming more and more important for smartphones because they have a physical lack of optics, so we’ve seen the rise of computational photography that tries to replicate an optical zoom,” says Arun Gill, Senior Market Analyst at Futuresource Consulting. “Top-end smartphones are increasingly featuring dual-lens cameras, but the Google Pixel 2 uses a single camera lens with computational photography to replicate an optical zoom and add various effects.”
Hybrid App....

Hybrid mobile apps are like any other apps you’ll find on your phone. They install on your device. You can find them in app stores. With them, you can play games, engage your friends through social media, take photos, track your health, and much more.
Like websites on the internet, hybrid mobile apps are built with a combination of web technologies like HTML, CSS, and JavaScript. The key difference is that hybrid apps are hosted inside a native application that utilizes a mobile platform’s WebView. (You can think of the WebView as a chromeless browser window that’s typically configured to run fullscreen.) This native shell enables hybrid apps to access device capabilities such as the accelerometer, camera, and contacts – capabilities that are often inaccessible from inside mobile browsers. Furthermore, hybrid mobile apps can include native UI elements where necessary, as evidenced by Basecamp’s approach to hybrid mobile app development.
It can be very difficult to tell how a mobile application is built, and hybrid mobile applications are no different. A well-written hybrid app shouldn’t look or behave any differently than its native equivalent. More importantly, users don’t care either way; they simply want an application that works well. Trying to figure out whether a mobile application is hybrid or native is like trying to differentiate rare grape varieties of wine: unless you’re a sommelier or someone who really cares, it’s not terribly important. What matters is that the wine tastes good. The same can be said for hybrid mobile applications; so long as the application does what it’s supposed to do, who really cares how it was built? This point was underscored by an experiment we conducted to see whether people could tell the difference between a native application and a hybrid application.
