Xiaomi Folding Phone...

Chinese smartphone makers Xiaomi and Oppo appear set to join Samsung and Huawei in the race to bring foldable smartphones to market. Huawei has reportedly been working to launch the world's first foldable smartphone and kick off the trend. A foldable smartphone from Samsung has been part of the rumour mill for over two years, with the company confirming in recent months that it would bring the device to market this year, adding that it would not be a gimmick. Reports have indicated that the South Korean giant is expected to launch the handset in 2019, though recent reports suggest it won't carry the Galaxy X moniker; Samsung may instead launch the foldable handset under a Galaxy F brand name. Whatever the case may be, Xiaomi and Oppo are also said to be working on their own foldable handsets for 2019.

As per an ETNews report, Xiaomi and Oppo are working with suppliers to source components for foldable phones, most likely hinges and foldable displays. The report adds that Xiaomi and Oppo will source foldable displays from Chinese suppliers, with LG Display also touted as a possible supplier. The report claims that Xiaomi may be working on a fold-out design, as opposed to the in-folding design Huawei and Samsung are rumoured to be pursuing. There is no clarity yet on the design Oppo could be working on.

As of now, there is no information on a launch timeframe for Xiaomi's and Oppo's foldable phones, and it is unclear how the Chinese companies will price them. Xiaomi and Oppo are known for launching phones at lower prices than Samsung and Huawei, so it is possible that a Xiaomi foldable smartphone arriving in 2019 would cost less than its rivals' devices. Currently, there is no further information regarding the design either.

As it stands, there are no true foldable handsets on the market. To be fair, ZTE did launch the Axon M foldable smartphone late last year, but that device featured two separate displays with a hinge in between. Foldable smartphones nonetheless promise to shape 2019, with at least four major releases in the works.

Uber Flying Car...


Uber’s “flying car” project Elevate came whizzing back into view today with a number of key announcements about where it will first appear, who will be working on it, and how this futuristic service will look when it ultimately takes off.
In a speech at the Web Summit in Lisbon today, Uber’s head of product Jeff Holden announced that the company is adding a third city, Los Angeles, to its list of places where it hopes to pilot its aerial taxi service by 2020. LA joins Dallas-Fort Worth and Dubai as cities announced to be working with Uber on the program.
Holden also said that Uber has signed a Space Act Agreement with NASA to create a brand-new air traffic control system to manage these low-flying, possibly autonomous aircraft. And to round it all out, Uber released a glossily produced video to demonstrate what using its aerial taxi service would look like from the perspective of a working mom who just wants to get home to her kids.
As you can see, it’s all very utopian. A passenger books the flight through her Uber app, and then ascends to a “skyport” on the roof of a nearby building. She badges through a turnstile using her smartphone — security is non-existent in this futuristic vision — and is briefly weighed to make sure she’s not too portly for Uber’s weight-conscious flying taxis.
Smiling agents wearing headsets, goggles, and Uber-branded vests lead her and several other passengers across the roof to their awaiting aircraft, which appears to be a plane-helicopter hybrid with fixed wings and tilt prop-rotors. Despite the presence of these whirling blades, no-one’s hair moves at all. During the flight, she looks out of the window with pity at all the poor souls stuck in traffic below, as she is whisked through the clouds to her gorgeous, perfect family waiting at home. The closing tagline: “Closer than you think.”

Pixel Watch...


  1. This will be the first flagship watch entirely designed and manufactured by Google. The company has yet to launch a smartwatch of its own; earlier, it chose to partner with LG on the Watch Style and Watch Sport.
  2. The company is working not on one but on three Wear OS devices, codenamed Ling, Triton and Sardine. Rather than three separate products, it's more likely these are variants of the same watch that differ in size and features.
  3. The watches will have GPS, LTE and VoLTE support.
  4. Some of the variants will have a heart rate sensor and the ability to track stress along with other health features. Expect more fitness coaching and guidance than we're used to seeing. We've recently learned that Google is working on a wearable health and fitness AI assistant to help whip you into shape. It will dissect your daily habits and suggest ways to lead a healthier lifestyle. This will tie in nicely with the Pixel Watch.
  5. All variants will sport Qualcomm's new chip, the Snapdragon Wear 3100, which uses an ARM Cortex-A7 architecture and an Adreno 304 GPU. This is the first time since 2016 that Qualcomm has upgraded its smartwatch processor. The company has already confirmed the chip will be out this fall, and a press conference is scheduled for September 10th, when we'll learn more.
  6. Some sources suggest the watches could be used to replace passwords, acting like a physical security key. The average person is said to have 19 passwords, and let's face it – lots of these are unsafe, weak, repetitive choices. It's only a matter of time before passwords are replaced by key fobs, fingerprints, voice and facial recognition. No one will miss them!
  7. Expect an always-on Google Assistant as a key feature of the new timepiece.
  8. Looks are just as important as functionality. Unfortunately, many Wear OS watches are bulky animals, designed to accommodate large batteries. A stylish design for the Google Pixel Watch should be on the cards.
  9. Wear OS watches are not well known for stonking battery life. However, the new Qualcomm chip is said to bring a significant boost in battery performance, and by extension the Pixel Watch should benefit. Battery life will of course depend on other specs. The device is likely to have added power modes, too.
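The security-key idea in item 6 boils down to a challenge-response exchange: the server sends a random challenge, the watch answers with a keyed hash of it, and no password ever travels over the wire. A toy sketch using Python's standard hmac module (the shared secret and function names are illustrative assumptions, not how any shipping product works):

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"provisioned-at-pairing-time"  # stored on both watch and server

def watch_respond(challenge: bytes) -> str:
    """The 'watch' proves possession of the secret without revealing it."""
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()

def server_verify(challenge: bytes, response: str) -> bool:
    """The server recomputes the HMAC and compares in constant time."""
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)  # fresh random challenge per login
print(server_verify(challenge, watch_respond(challenge)))  # → True
# A response replayed against a different challenge fails verification.
print(server_verify(secrets.token_bytes(16), watch_respond(challenge)))  # → False
```

Because each challenge is fresh, an eavesdropper who captures one response cannot reuse it, which is the property that makes a possession-based key safer than a reused password.
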
Apple Car...

Starting in 2014, Apple began working on "Project Titan," with upwards of 1,000 employees developing an electric vehicle at a secret location near its Cupertino headquarters. Internal strife, leadership issues, and other problems impacted the car project, with rumors suggesting that in 2016 Apple shelved its plans for a car for the foreseeable future.
Apple reportedly laid off hundreds of employees who were working on the project, and under the leadership of Bob Mansfield, Apple is said to have transitioned to building an autonomous driving system rather than a full car, which could potentially be used in the cars of various partner companies.
Though multiple rumors have suggested Apple has shifted its focus to autonomous driving software rather than a full-on car, the August 2018 rehiring of Tesla engineer Doug Fields has led to speculation that Apple may again be exploring a car option.
Reliable Apple analyst Ming-Chi Kuo also believes that Apple is still working on an Apple Car that will launch between 2023 and 2025. Kuo believes the car will be Apple's "next star product" with Apple able to offer "better integration of hardware, software and services" than potential competitors in the automotive market.
In June of 2017, Apple CEO Tim Cook spoke publicly about Apple's work on autonomous driving software, confirming the company's work in a rare candid moment. Apple doesn't often share details on what it's working on, but when it comes to the car software, it's harder to keep quiet because of regulations.
"We're focusing on autonomous systems. It's a core technology that we view as very important. We sort of see it as the mother of all AI projects. It's probably one of the most difficult AI projects actually to work on." --Apple CEO Tim Cook on Apple's plans in the car space.
In early 2017, Apple was granted a permit from the California DMV to test self-driving vehicles on public roads, and it is testing its self-driving car software platform in several 2015 Lexus RX450h SUVs leased from Hertz. The SUVs have been spotted out on the road with a host of sensors and cameras since April.
Apple has several of the Lexus SUVs outfitted with a range of different sensors running its self-driving software. New LIDAR equipment was spotted in August of 2017, and Apple has been significantly ramping up its fleet in 2018. As of May 2018, Apple has 62 vehicles out on the road using its autonomous driving software.
Robotic Process Automation...

Robotic process automation (RPA) is an emerging form of business process automation technology based on the notion of software robots or artificial intelligence (AI) workers.
In traditional workflow automation tools, a software developer produces a list of actions to automate a task and interfaces to the back-end system using internal application programming interfaces (APIs) or a dedicated scripting language. In contrast, RPA systems develop the action list by watching the user perform the task in the application's graphical user interface (GUI), and then perform the automation by repeating those actions directly in the GUI. This can lower the barrier to automation in products that might not otherwise offer APIs for this purpose.
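The record-and-replay idea can be sketched in a few lines of Python. The Action type, the hard-coded demonstration, and the logging replay loop are illustrative assumptions, not any particular RPA product's API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One recorded GUI step, e.g. click a button or type into a field."""
    kind: str
    target: str

def record_demo() -> list[Action]:
    # A real RPA tool would capture these by watching the user;
    # here they are hard-coded for illustration.
    return [
        Action("click", "invoice_form"),
        Action("type", "amount=120.50"),
        Action("click", "submit"),
    ]

def replay(actions: list[Action], log: list[str]) -> None:
    # The robot repeats the demonstrated steps directly against the GUI;
    # this sketch just records what it would do.
    for a in actions:
        log.append(f"{a.kind}:{a.target}")

log: list[str] = []
replay(record_demo(), log)
print(log)  # → ['click:invoice_form', 'type:amount=120.50', 'click:submit']
```

The point of the sketch is the division of labour: the action list comes from a demonstration rather than from code a developer writes against an API.
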
RPA tools have strong technical similarities to graphical user interface testing tools. These tools also automate interactions with the GUI, and often do so by repeating a set of demonstration actions performed by a user. RPA tools differ from such systems by including features that allow data to be handled in and between multiple applications: for instance, receiving an email containing an invoice, extracting the data, and then typing it into a bookkeeping system.
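The invoice example can be sketched as a single extraction step: pull the invoice number and amount out of an email body with a regular expression, producing the record the robot would then type into the bookkeeping system. The email text, field names, and patterns are invented for illustration:

```python
import re

EMAIL_BODY = """Dear supplier,
Please process invoice INV-2041 for the amount of 1,250.00 EUR.
Regards, Accounts"""

def extract_invoice(text: str) -> dict:
    """Extract an invoice number and amount from free-form email text."""
    number = re.search(r"\bINV-(\d+)\b", text)
    amount = re.search(r"amount of ([\d,]+\.\d{2})", text)
    if not (number and amount):
        raise ValueError("invoice fields not found")
    return {
        "invoice": f"INV-{number.group(1)}",
        "amount": float(amount.group(1).replace(",", "")),
    }

record = extract_invoice(EMAIL_BODY)
print(record)  # → {'invoice': 'INV-2041', 'amount': 1250.0}
```

A production RPA flow would chain this with GUI steps (open the bookkeeping app, fill the fields), but the data-handling layer is what separates RPA from plain GUI test automation.
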

Historical Evolution

As a form of automation, the same concept has been around for a long time in the form of screen scraping, but RPA is considered a significant technological evolution of that technique: new software platforms are emerging which are sufficiently mature, resilient, scalable and reliable to make the approach viable for use in large enterprises (which would otherwise be reluctant due to perceived risks to quality and reputation).
By way of illustrating how far the technology has developed since its early form in screen scraping, it is useful to consider an example cited in one academic study. Users of one platform at Xchanging - a UK-based company which provides business processing, technology and procurement services across the globe - anthropomorphized their robot into a co-worker named "Poppy" and even invited "her" to the Christmas party. Such an illustration demonstrates the level of intuition, engagement and ease of use of modern RPA technology platforms, which leads their users (or "trainers") to relate to them as beings rather than abstract software services. The "code free" nature of RPA (described below) is just one of a number of significant features differentiating RPA from screen scraping.

Impact on Society

Academic studies project that RPA, among other technological trends, will drive a new wave of productivity and efficiency gains in the global labour market. Although the figure is not attributable to RPA alone, Oxford University conjectures that up to 35% of all jobs may be automated by 2035.
In a TEDx talk hosted by UCL in London, entrepreneur David Moss explains that digital labour in the form of RPA is not only likely to revolutionise the cost model of the services industry by driving the price of products and services down, but that it is likely to drive up service levels, quality of outcomes and create increased opportunity for the personalisation of services.
Meanwhile, Professor Willcocks, author of the LSE paper cited above, speaks of increased job satisfaction and intellectual stimulation, characterising the technology as having the ability to "take the robot out of the human", a reference to the notion that robots will take over the mundane and repetitive portions of people's daily workload, leaving them to be redeployed into more interpersonal roles or to concentrate on the remaining, more meaningful, portions of their day.
DevOps...

DevOps is the blending of tasks performed by a company's application development and systems operations teams. The term DevOps is used in several ways. In its broadest meaning, DevOps is an operational philosophy that promotes better communication between development and operations as more elements of operations become programmable. In its narrowest interpretation, DevOps describes the part of an organization's information technology (IT) team that creates and maintains infrastructure. The term may also be used to describe a culture that looks strategically at the entire software delivery chain, overseeing shared services and promoting the use of new development tools and best practices.
Traditionally in the enterprise, the development team tested new code in an isolated development environment for quality assurance (QA) and -- if requirements were met -- released the code to operations for use. The operations team deployed the program and maintained it from that point on. One of the problems with this approach, known as waterfall development, is that there was usually a long time between software releases, and because the two teams worked separately, the development team was not always aware of operational roadblocks that might prevent the program from working as anticipated.
The DevOps approach seeks to meld application development and deployment into a more streamlined process that aligns development, quality assurance (QA) and operations team efforts. It also shifts some of the operations team's responsibilities back to the development team in order to facilitate continuous development, continuous integration, continuous delivery and continuous monitoring processes. The need to tear down the silos between development and operations has been accelerated by the pressure to release code faster and more often so the organization can respond in a more agile manner to changing business requirements. Other drivers for breaking down the silos include the increasing use of cloud computing and advances in software-defined infrastructures, microservices, containers and automation.

AI-powered camera...

Artificial intelligence (AI) is everywhere, and if you haven't yet got an AI-powered smartphone, you probably soon will. Is it all just marketing hubris, or is AI in a smartphone – and particularly in its camera – something we should all aspire to have? With the term AI increasingly being used not only in smartphones but in all kinds of cameras, it pays to know what AI is actually doing for your photos.

What is AI?

AI is a branch of computer science that examines whether we can teach a computer to think or, at least, learn. It's generally split into subsets of technology that try to emulate what humans do, such as speech recognition, voice-to-text dictation, image recognition and face scanning, computer vision, and machine learning. What's it got to do with cameras? Computational photography and time-saving photo editing, that's what. And voice activation.

Voice-activated cameras

The ability for a computer to understand human speech is a form of AI, and it's been creeping onto cameras for the last few years. 
Smartphones have been offering Google Now and Siri for a few years, while Alexa is entering homes via the Amazon Echo speakers. Action cameras have jumped on that bandwagon in recent years, with the GoPro action cameras and even dash cams able to take actions when you utter simple phrases such as 'start video', 'take photo' and so on. 

AI software

AI is about new kinds of software, initially to make up for smartphones' lack of zoom lenses. "Software is becoming more and more important for smartphones because they have a physical lack of optics, so we've seen the rise of computational photography that tries to replicate an optical zoom," says Arun Gill, Senior Market Analyst at Futuresource Consulting. "Top-end smartphones are increasingly featuring dual-lens cameras, but the Google Pixel 2 uses a single camera lens with computational photography to replicate an optical zoom and add various effects." 
Hybrid App....

Hybrid mobile apps are like any other apps you’ll find on your phone. They install on your device. You can find them in app stores. With them, you can play games, engage your friends through social media, take photos, track your health, and much more.
Like the websites on the internet, hybrid mobile apps are built with a combination of web technologies like HTML, CSS, and JavaScript. The key difference is that hybrid apps are hosted inside a native application that utilizes a mobile platform’s WebView. (You can think of the WebView as a chromeless browser window that’s typically configured to run fullscreen.) This enables them to access device capabilities such as the accelerometer, camera, contacts, and more. These are capabilities that are often restricted to access from inside mobile browsers. Furthermore, hybrid mobile apps can include native UI elements in situations where necessary, as evidenced by Basecamp’s approach towards hybrid mobile app development.
It can be very difficult to tell how a mobile application is built, and hybrid mobile applications are no different. A well-written hybrid app shouldn't look or behave any differently than its native equivalent. More importantly, users don't care either way; they simply want an application that works well. Trying to figure out whether a mobile application is hybrid or native is like trying to differentiate rare grape varieties of wine. Unless you're a sommelier or someone who really cares, it's not terribly important. What matters is that the wine tastes good. The same can be said for hybrid mobile applications: so long as the application does what it's supposed to do, who really cares how it was built? This point is underscored by an experiment we conducted to see whether people could tell the difference between a native application and a hybrid application.

Exponential growth in cloud services solutions....


Software as a Service (SaaS) opened a flexible and financially attractive door for businesses and consumers to try early cloud services. The growth of infrastructure and platform as a service (IaaS and PaaS, respectively) has expanded the number of cloud solutions available in the public and private sectors. In 2018, we expect to see many more organizations take advantage of the simplicity and high performance the cloud promises.
According to a forward-looking 2016 survey on cloud services from Cisco, these solutions will continue to be deployed and used worldwide to accomplish diverse goals on an unprecedented level. 2018 will see SaaS solutions take the cake as the most widely deployed cloud service across the globe. The Cisco survey also forecasts that SaaS will account for 60% of all cloud-based workloads, a 12% increase over 2017 predictions. PaaS solutions will experience a modest five percent growth rate, while IaaS solutions are also set to increase. Given that these projections were made in 2016, and given positive performance in 2017, we can reasonably expect even greater growth in cloud services than predicted. Businesses that want to simplify operations and make it easier for their customers to access services will move more aggressively toward integrating SaaS, IaaS, and/or PaaS into their business processes.

The CDO role will grow extensively....


With the establishment of CDOs (Chief Data Officers) and other senior data professionals in top management, large organizations are changing their approach to data management. CDOs are now the driving force behind innovation and differentiation. They are in charge of revolutionizing existing business models, improving the company's communication with its target audience, and revealing new opportunities to improve business performance. The position is relatively new, but it is quickly becoming mainstream. According to Gartner, by 2019 the CDO position will be present in 90% of large organizations, but only half of them will manage to succeed. In addition to personal qualities, an understanding of the responsibilities, and awareness of the obstacles they might encounter, there is one more important thing a company should do to unlock a CDO's potential: firms should consider splitting the IT department into "I" and "T" separately, with the CDO taking the lead in the new group responsible for information management.

Cyber Security....

Cybersecurity is the protection of internet-connected systems, including hardware, software and data, from cyberattacks.
In a computing context, security comprises cybersecurity and physical security -- both are used by enterprises to protect against unauthorized access to data centers and other computerized systems. Information security, which is designed to maintain the confidentiality, integrity and availability of data, is a subset of cybersecurity.
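The integrity leg of that confidentiality-integrity-availability triad can be illustrated with a checksum: hash the data when it is stored, re-hash on retrieval, and any tampering shows up as a mismatch. A toy sketch using Python's standard hashlib (the message contents are invented for illustration):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used as an integrity fingerprint for the data."""
    return hashlib.sha256(data).hexdigest()

original = b"wire $100 to account 1234"
stored_digest = fingerprint(original)  # recorded when the data is saved

# Later: verify the data has not been altered in transit or at rest.
tampered = b"wire $900 to account 9999"
print(fingerprint(original) == stored_digest)  # → True
print(fingerprint(tampered) == stored_digest)  # → False
```

Real systems combine such hashes with encryption (for confidentiality) and redundancy (for availability), but the mismatch check is the core of integrity protection.
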

Types of cybersecurity threats

The process of keeping up with new technologies, security trends and threat intelligence is a challenging task. However, it's necessary in order to protect information and other assets from cyberthreats, which take many forms.
  • Ransomware is a type of malware that involves an attacker locking the victim's computer system files -- typically through encryption -- and demanding a payment to decrypt and unlock them.
  • Malware is any file or program used to harm a computer user, such as worms, computer viruses, Trojan horses and spyware.
  • Social engineering is an attack that relies on human interaction to trick users into breaking security procedures in order to gain sensitive information that is typically protected.
  • Phishing is a form of fraud where fraudulent emails are sent that resemble emails from reputable sources; however, the intention of these emails is to steal sensitive data, such as credit card or login information.
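The phishing entry above lends itself to a toy heuristic: score an email on simple red flags such as urgency wording and requests for credentials. Real filters use far richer signals (sender reputation, URL analysis, trained models); the keyword list here is purely an illustrative assumption:

```python
RED_FLAGS = ("verify your account", "urgent", "password", "click here", "suspended")

def phishing_score(email_text: str) -> int:
    """Count naive phishing indicators present in an email body."""
    text = email_text.lower()
    return sum(flag in text for flag in RED_FLAGS)

suspect = "URGENT: your account is suspended. Click here to verify your account password."
benign = "Minutes from today's team meeting are attached."
print(phishing_score(suspect))  # → 5
print(phishing_score(benign))   # → 0
```

Even this crude count shows why phishing emails share a recognizable texture: they stack urgency and credential requests in ways legitimate mail rarely does.
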

Growth of NLP...

The use of chatbots in customer service became one of the leading trends of the outgoing year. In 2018, applications will need the ability to recognize the little nuances of our speech. Users want to get a response from their software by asking questions and giving commands in natural language, without thinking about the "right" way to ask. The development of NLP and its integration into computer programs will be one of the most exciting challenges of 2018, and we have high expectations for it.
Understanding the tone of speech, its emotional coloring, and double meanings is a simple task for a human but a difficult one for a computer accustomed to the language of specific commands. These complex algorithms require many steps of prediction and computation, all of which must occur in the cloud, in a split second. With the help of NLP, people will be able to ask more nuanced questions and receive apposite answers and, as a result, gain better insights into their problems.
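The gap between command languages and natural language can be made concrete with a naive bag-of-words tone scorer. Real NLP systems use trained models over huge corpora; the word lists below are illustrative assumptions:

```python
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "angry"}

def tone(sentence: str) -> str:
    """Crude emotional-coloring estimate from word membership alone."""
    words = sentence.lower().replace(".", "").replace("!", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(tone("I love this excellent camera!"))    # → positive
print(tone("The battery life is terrible."))    # → negative
print(tone("The package arrived on Tuesday."))  # → neutral
```

A scorer like this fails exactly where the paragraph says computers struggle: sarcasm, double meanings, and tone all slip past word counting, which is why modern NLP leans on learned representations instead.
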

Google Maps VPS...


With AR on, a future version of the Maps app will merge its traditional interface with a live camera view. When doing navigation, superimposed arrows will appear at each turn, making it harder to misinterpret directions. The company is even experimenting with inserting animated characters such as a fox, which would remove any doubt and make the app more entertaining. 

AR technology may also make its way into the rest of the app, for example popping up an information card when looking at a storefront.

VPS is a related feature, combining the live camera view with Google's data trove to get a better sense of position than possible with just GPS. The technology could be especially useful in dense urban areas where GPS is often blocked by skyscrapers.

A less radical addition in the works is a "For You" tab that will show nearby points of interest, with a "Your Match" feature attempting to custom-tailor recommendations. One intended use is sharing lists with friends instead of having to rattle off names from memory.

Deep learning will be faster and data collection better...

Today, deep learning faces the challenges of data collection and computational complexity. Because of the latter, a big part of hardware innovation is aimed at speeding up deep learning experiments, such as new GPUs with more cores and architectures different from today's, now under development. According to Marc Edgar, a senior information scientist at GE Research, in the next 3-5 years deep learning will shorten the development time of software solutions from several months to several days. This will lead to better functional characteristics, increased productivity and reduced product costs.
Speaking of data collection, almost all large firms have now realized its importance and its influence on the effectiveness of their work. In the coming year, companies will use even more data, and success will depend on the ability to combine disparate data. In 2018, companies will collect customer data via CRM, ticketing systems, BPM and DMP tools, and omnichannel platforms. There is also a rise in the popularity of collecting data with specialized sensors such as LIDAR. The integration of existing systems, and of all types of client data into a single information pool, will definitely be a trend. Moreover, startups will continue to create new methods for gathering and using data, and the costs of doing so will therefore fall.
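Combining disparate sources into a single information pool can be sketched as a merge keyed on a shared customer ID. The source names, field names, and records below are invented for illustration:

```python
# Two hypothetical sources: CRM profiles and support-ticket history.
crm = {
    "c1": {"name": "Ada", "segment": "premium"},
    "c2": {"name": "Ben", "segment": "basic"},
}
tickets = {
    "c1": {"open_tickets": 2},
    "c3": {"open_tickets": 1},
}

def merge_sources(*sources: dict) -> dict:
    """Fold every source into one record per customer ID."""
    pool: dict = {}
    for source in sources:
        for cid, fields in source.items():
            pool.setdefault(cid, {}).update(fields)
    return pool

pool = merge_sources(crm, tickets)
print(pool["c1"])  # → {'name': 'Ada', 'segment': 'premium', 'open_tickets': 2}
```

The hard part in practice is not the merge itself but agreeing on the shared key and reconciling conflicting fields, which is exactly where the integration effort described above goes.
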
Self-Driving Cars

The stir about electric and autonomous cars has been around for a couple of years now. Big names like Volkswagen, Mercedes, Tesla, General Motors, and Google (of all companies) are pushing for driverless cars.
The goal everyone is aiming for is to solve the long-standing issue of people on the road. Yes, we are a danger to ourselves. Google’s self-driving cars have been involved in 11 minor accidents in the last 6 years, and none of them were reportedly caused by the car’s own fault.
By introducing autonomous cars on roads, cars which can compute faster than a hundred sober minds combined, road-related accidents may see a significant drop. Driverless cars will also significantly help those who do not have the capacity to drive, either because of health reasons, disabilities or old age.

New approaches to privacy and security...


 
Technological development boosts the importance of data, so hacking techniques become ever more sophisticated. The growing number of devices connected to the internet creates more data but also makes that data more vulnerable and less protected. IoT gadgets are getting more popular and widely used, yet they remain extremely insecure in terms of data privacy. Large enterprises are constantly under threat of hacking attacks, as happened with Uber and Verizon in 2017.
Luckily, solutions are achievable, and this year we will see great improvements in data protection services. Machine learning will be the most significant security trend, establishing a probabilistic, predictive approach to data security. Implementing techniques like behavioral analysis makes it possible to detect and stop attacks capable of bypassing static protective systems. Blockchain has brought attention to zero-knowledge proofs, a technique that will develop further in 2018, enabling transactions that secure users' privacy using mathematics. Another new approach to safety is CARTA (continuous adaptive risk and trust assessment), based on continuous evaluation of potential risks and degrees of trust, adapting to every situation; this applies to all business participants, from a company's developers to its partners. Although our security is still vulnerable, there are promising solutions that can bring better privacy into our lives.

Augmented reality goes mainstream....


Before smartphones arrived 10 years ago, most people would have considered spending five hours a day staring at a phone crazy. In 2018, the bent-neck trend will start to reverse itself.
The mobile game Pokémon Go has unleashed a billion-dollar demand for augmented reality entertainment, and major brands are taking notice. Thanks to the introduction of affordable augmented reality glasses, our phones will remain in our pockets and Heads Up Displays (HUD) will improve how we work, shop, and play.
HUDs, best known today as the instrument gauges that fighter pilots monitor on their visors or windshields, will become a standard in consumer eyeglasses. Imagine walking down the street in a foreign country, for example, and having all of the store signs instantly translated into English thanks to your trendy sunglasses.
AR will customize in-store experiences with mannequins that match your body type and display enough virtual inventory to rival any online site. Merchants will create AR experiences with their packaging so that demonstration videos can appear when you look at the product on the shelf or celebrity spokespeople can magically stand in the aisle to pitch the product. Virtual pop-up stores can be built to appear anywhere that crowds are gathered (in a stadium, a busy street corner, or even inside a subway). These non-brick and mortar retail locations will bring new opportunities for merchants to create engaging shopping experiences anywhere with accessible bandwidth.
Li-Fi, a new light-based wireless connection with data speeds 100 times those of Wi-Fi, will bring high-definition virtual objects into stores. With Li-Fi and AR, consumers can see limitless virtual inventory in store, at scale.
With just a wave of your hand, a car salesperson can change the model, color, and customized features of the car “sitting” on the dealership’s showroom floor. Combining real and virtual objects can enhance experiences for all out-of-home activities. Sports stadiums will be brought into the 21st century with personalized HUDs of players on the field. Imagine watching a live football game in the stadium and seeing personalized stats floating above the fantasy sports players you follow. When watching sports from home, AR has the potential to bring the excitement of life-size boxing matches into your living room. The real promise of AR is to bring people the information they need without having to ask for it.

The unstoppable freight train that is automation...

The more intelligent machines become, the more they can do for us. That means even more processes, decisions, functions and systems can be automated and carried out by algorithms or robots.
Eventually, a wide range of industries and jobs will be impacted by automation. However, for now, the first wave of jobs that machines are taking can be categorized using the four Ds: dull, dirty, dangerous and dear. This means humans will no longer be needed to do the jobs that machines can do faster, safer, cheaper and more accurately.
Beyond the four Ds, machines, robots and algorithms will replace – or augment – many human jobs, including professional jobs in fields like law or accounting. From truck drivers to bricklayers to doctors, the list of jobs likely to be affected by automation is surprising. One estimate reckons that 47 percent of US jobs are at risk of automation.

Artificial Neural Network


In information technology (IT), a neural network is a system of hardware and/or software patterned after the operation of neurons in the human brain. Neural networks -- also called artificial neural networks -- are a variety of deep learning technology, which also falls under the umbrella of artificial intelligence, or AI.
Commercial applications of these technologies generally focus on solving complex signal processing or pattern recognition problems. Examples of significant commercial applications since 2000 include handwriting recognition for check processing, speech-to-text transcription, oil-exploration data analysis, weather prediction and facial recognition.


How artificial neural networks work

A neural network usually involves a large number of processors operating in parallel and arranged in tiers. The first tier receives the raw input information -- analogous to optic nerves in human visual processing. Each successive tier receives the output from the tier preceding it, rather than from the raw input -- in the same way neurons further from the optic nerve receive signals from those closer to it. The last tier produces the output of the system.
Each processing node has its own small sphere of knowledge, including what it has seen and any rules it was originally programmed with or developed for itself. The tiers are highly interconnected, which means each node in tier n will be connected to many nodes in tier n-1 -- its inputs -- and in tier n+1, for which it provides input. There may be one or multiple nodes in the output layer, from which the answer the system produces can be read.
Neural networks are notable for being adaptive, which means they modify themselves as they learn from initial training and subsequent runs provide more information about the world. The most basic learning model is centered on weighting the input streams, which is how each node weights the importance of input from each of its predecessors. Inputs that contribute to getting right answers are weighted higher.
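The tiered, weighted-input structure described above can be sketched in a few lines of code. This is a minimal illustration, not a production network: the weights, tier sizes and the choice of a sigmoid activation are all hypothetical, and real networks learn their weights through training rather than having them hard-coded.

```python
import math

def sigmoid(x):
    # Squash a weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(tiers, inputs):
    # Each tier is a list of nodes; each node is a list of weights,
    # one per output of the preceding tier. A node's output is the
    # activation of the weighted sum of its inputs.
    activations = inputs
    for tier in tiers:
        activations = [sigmoid(sum(w * a for w, a in zip(node, activations)))
                       for node in tier]
    return activations

# Hypothetical weights: 2 raw inputs -> 2 hidden nodes -> 1 output node.
network = [
    [[0.5, -0.6], [0.9, 0.2]],   # hidden tier
    [[1.0, -1.0]],               # output tier
]
output = forward(network, [1.0, 0.0])
```

Training would then consist of nudging each weight up or down depending on whether it contributed to a right or wrong answer, which is the weighting process the paragraph above describes.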

Applications of artificial neural networks

Image recognition was one of the first areas to which neural networks were successfully applied, but the technology's uses have expanded to many more areas, including:
  • Chatbots
  • Natural language processing, translation and language generation
  • Stock market prediction
  • Delivery driver route planning and optimization
  • Drug discovery and development

BLOCKCHAIN


Blockchain is one of the biggest buzzwords in technology right now. But what is it? And why are all your friends and family talking about it?
Let’s start from the beginning. The first major application of blockchain technology was bitcoin, which was released in 2009. Bitcoin is a cryptocurrency, and the blockchain is the technology that underpins it. A cryptocurrency is a digital coin that runs on a blockchain.
Understanding how the blockchain works with bitcoin will allow us to see how the technology can be transferred to many other real-world use cases.
Bitcoin is the brainchild of a mysterious person or group of people known as Satoshi Nakamoto. Nobody knows the identity of Nakamoto, but their vision was laid out in a 2008 white paper called “Bitcoin: A Peer-to-Peer Electronic Cash System.”
The bitcoin blockchain
The blockchain behind bitcoin is a public ledger of every transaction that has taken place. It cannot be tampered with or changed retrospectively. Advocates of the technology say this makes bitcoin transactions secure and safer than current systems.
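The tamper-evidence comes from cryptographic hashing: each block records the hash of the block before it, so altering any past transaction changes that block's hash and breaks every link after it. Here is a minimal sketch of that idea; the block layout and function names are illustrative, not bitcoin's actual data structures, and a real blockchain also adds proof-of-work and a peer-to-peer network.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents, including the previous block's
    # hash, so changing any earlier block invalidates every later link.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    # Link the new block to the chain's current tip (or to a fixed
    # "genesis" value if the chain is empty).
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    # The chain is valid only if every block records the hash
    # of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["alice pays bob 1 BTC"])
add_block(chain, ["bob pays carol 0.5 BTC"])
```

Editing any transaction in an earlier block makes `verify` fail, which is why retrospective changes to the ledger are detectable.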
So here are a few facts about bitcoin:
  • It is not issued by a central authority.
  • There is a hard limit of 21 million bitcoins.
  • Currently just over 17 million are in circulation.
  • The first transaction using bitcoin is widely believed to have been carried out by a programmer named Laszlo Hanyecz, who spent 10,000 bitcoin on two Papa John's pizzas in 2010.
  • The identity of bitcoin creator Satoshi Nakamoto remains a mystery.
  • Bitcoin has often been used to buy illicit products such as drugs.

The Internet of Things (IoT) and how everyday devices are becoming more ‘smart’


The IoT – which encompasses smart, connected products like smartphones and smart watches – is a major contributing factor in this exponential increase in data. That’s because all these smart devices are constantly gathering data, connecting to other devices and sharing that data – all without human intervention (your Fitbit syncing data to your phone, for instance).
Pretty much anything can be made smart these days. Our cars are becoming increasingly connected; by 2020, a quarter of a billion cars will be hooked up to the Internet. For our homes, there are obvious smart products like TVs, and less obvious ones, like yoga mats that track your Downward Dog. And, of course, many of us have voice-enabled personal assistants like Alexa – another example of an IoT device.
That’s already a lot of devices, but the IoT is just getting started. IHS has predicted there’ll be 75 billion connected devices by 2025.

Google Duplex....

Digital Centralization


Over the past decade, we’ve seen the debut of many different types of devices, including smartphones, tablets, smart TVs, and dozens of other “smart” appliances. We’ve also come to rely on many individual apps in our daily lives, for everything from navigation to changing the temperature of our homes. Consumers are craving centralization: a convenient way to manage everything from as few devices and central locations as possible. Smart speakers are a good step in the right direction, but 2018 may see the rise of something even better.
