Latest Posts

Sunday, May 14, 2023

Generalizing AI

MindBEE

 

The big picture

When OpenAI revealed ChatGPT in late 2022, people clamored to test it. They asked complicated questions, requested poems and got precisely what they wanted: in one case, instructions for removing a peanut butter sandwich from a VCR written in the style of the King James Bible.


And before ChatGPT, the internet was flooded with AI-generated art. Text-to-image generators like Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2 stunned people by responding to written prompts with photorealistic images.


This generated content is part of one of the biggest step changes in the history of AI: the introduction of pretrained models with remarkable task adaptability.


It began with a landmark innovation in AI model architecture, the Transformer, introduced by Google researchers in 2017. Since then, tech companies and researchers have been supersizing AI by increasing the sizes of models and training sets. The result? Powerful pretrained models, often called “foundation models,” that offer unprecedented adaptability within the domains they’re trained on.


With foundation models, businesses can start to approach many tasks and challenges differently, shifting focus from building their own AI to learning to build with AI.
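As a rough illustration of what “building with AI” can look like in practice, here is a minimal sketch using the open-source Hugging Face transformers library. The task, labels, and example text are illustrative assumptions, not something described in this post.

```python
# Minimal sketch: reusing a pretrained foundation model for a new task
# with no task-specific training. Requires: pip install transformers
from transformers import pipeline

# A zero-shot classification pipeline lets us choose labels at inference time.
classifier = pipeline("zero-shot-classification")

ticket = "My card was charged twice for the same order."      # illustrative input
labels = ["billing", "shipping", "technical issue"]            # illustrative labels

result = classifier(ticket, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))      # best-matching label
```

The point of the sketch is the workflow, not the specific model: the heavy lifting was done during pretraining, and the business problem is expressed entirely at inference time.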



A foundation for intelligence breakthroughs

OpenAI’s GPT-3, released in 2020, was at the time the largest language model in the world. It could perform tasks it had never been explicitly trained on, and it outperformed models that had been trained on those tasks. Since then, companies like Google, Microsoft, and Meta have created their own large language models.



To define this new class of AI, researchers from the Stanford Institute for Human-Centered Artificial Intelligence coined the term “foundation model.” They generally defined them as large AI models trained on a vast quantity of data with significant downstream task adaptability.


Some are working to expand foundation models beyond language and images to include more data modalities. Meta, for instance, developed a model that learned the “language of protein” and accelerated protein structure predictions by up to sixtyfold.


Many efforts are underway to make building and deploying foundation models easier. Rapidly growing compute requirements—and the associated costs and expertise needed to handle this scale—are the biggest barriers today. And even after a model is trained, it’s expensive to run and host its downstream variations.

Saturday, March 21, 2020

Bionic Arms, another achievement

MindBEE

Bionic arms work by picking up signals from a user's muscles. When a user puts on their bionic arm and flexes the muscles in their residual limb just below the elbow, special sensors detect the tiny, naturally generated electric signals and convert them into intuitive, proportional bionic hand movement.
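To make “proportional” concrete, here is a deliberately simplified sketch of how a rectified, smoothed muscle signal might be mapped to hand closure. The signal, rest level, and maximum level are invented for illustration and do not reflect any manufacturer's actual control code.

```python
# Deliberately simplified illustration of proportional myoelectric control:
# the stronger the smoothed muscle signal, the further the hand closes.
import numpy as np

def hand_closure(emg_window, rest_level=0.02, max_level=0.8):
    """Map one window of raw EMG samples to a hand-closure fraction in [0, 1]."""
    envelope = np.mean(np.abs(emg_window))                   # rectify and smooth
    fraction = (envelope - rest_level) / (max_level - rest_level)
    return float(np.clip(fraction, 0.0, 1.0))                # clamp to a valid range

gentle_flex = 0.05 * np.random.randn(200)                    # synthetic weak signal
strong_flex = 0.60 * np.random.randn(200)                    # synthetic strong signal
print(hand_closure(gentle_flex), hand_closure(strong_flex))  # small vs. large closure
```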

Background

Bionic prosthetic hands are rapidly evolving. An in-depth knowledge of this field of medicine is currently required only by the small number of individuals working in highly specialist units. However, with improving technology it is likely that the demand for and application of bionic hands will continue to increase, and a wider understanding will become necessary.

Methods

We review the literature and summarise the important advances in medicine, computing and engineering that have led to the development of currently available bionic hand prostheses.

Findings

The bionic limb of today has progressed greatly from the hook prostheses introduced centuries ago. We discuss the ways the major functions of the human hand are being replicated artificially in modern bionic hands. Despite these impressive advances, bionic prostheses remain an inferior replacement for their biological counterparts. Finally, we discuss some of the key areas of research that could lead to vast improvements in bionic limb functionality and may one day fully replicate the biological hand, or perhaps even surpass its innate capabilities.

Conclusion

It is important for the healthcare community to have an understanding of the development of bionic hands and the technology underpinning them, as this area of medicine will continue to expand.

Way to Brain-Computer Interface and Convolutional Neural Networks

MindBEE





Part 1: The big picture of brain-computer interfaces and AI + research papers

Part 2: In-depth explanation of the neural networks used with BCIs


DEFINITIONS OF Brain-Computer Interface, Convolutional Neural Networks, and related terms:

Brain-Computer Interface (BCI): a device that enables its user to interact with a computer by means of brain activity alone, generally measured by electroencephalography (EEG).
Electroencephalography (EEG): the physiological method of choice for recording the electrical activity generated by the brain, via electrodes placed on the scalp surface.

Functional magnetic resonance imaging (fMRI): measures brain activity by detecting changes associated with blood flow.

Functional Near-Infrared Spectroscopy (fNIRS): the use of near-infrared spectroscopy (NIRS) for the purpose of functional neuroimaging. Using fNIRS, brain activity is measured through hemodynamic responses associated with neuron behaviour.
Convolutional Neural Network (CNN): a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data (a small, illustrative sketch follows these definitions).
Visual Cortex: the part of the cerebral cortex that receives and processes sensory nerve impulses from the eyes.
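Here is the illustrative-only sketch referenced above: a small PyTorch CNN that classifies fixed-length EEG windows, e.g. into two mental states. The channel count, window length, and layer sizes are arbitrary assumptions rather than values from any particular BCI study.

```python
# Illustrative sketch of a CNN over multi-channel EEG windows.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels=8, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filters
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), n_classes)

    def forward(self, x):                  # x: (batch, channels, samples)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = EEGConvNet()
fake_eeg = torch.randn(4, 8, 256)          # 4 windows of 8-channel EEG
print(model(fake_eeg).shape)               # -> torch.Size([4, 2])
```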

What are these brain-computer interfaces actually capable of?







It depends on who you ask and whether or not you are willing to undergo surgery. “For the purpose of this thought experiment, let’s assume that healthy people will only use non-invasive BCIs, which don’t require surgery. In that case, there are currently two main technologies, fMRI and EEG. The first requires a massive machine, but the second, with consumer headsets like Emotiv and NeuroSky, has actually become available to a more general audience.”

Conclusion

In this paper, we have presented a novel approach that combines deep learning with the EEG2Code method to predict properties of a visual stimulus from EEG signals. We showed that a subject can use this approach in an online BCI to reach an information transfer rate (ITR) of 1237 bit/min, which makes the presented BCI system by far the fastest to date. In a simulated online experiment with 500,000 targets, we further showed that the presented method allows differentiating 500,000 different stimuli based on 2 s of EEG data with an accuracy of 100% for the best subject. As the presented method can extract more information from the EEG than can be used for BCI control, we discussed a ceiling effect which shows that more powerful methods for brain-signal decoding do not necessarily translate into better BCI control, at least for BCIs based on visual stimuli. Furthermore, it is important to differentiate between the performance of a method for decoding brain signals and its performance for BCI control.
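The information transfer rate mentioned in this conclusion is conventionally computed with the Wolpaw formula. The sketch below shows that calculation with made-up example numbers, not the parameters used in the study above.

```python
# Wolpaw information transfer rate (ITR), the standard way to express
# BCI speed in bit/min. The numbers below are illustrative only.
from math import log2

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Bits per selection (Wolpaw formula) scaled to selections per minute."""
    p, n = accuracy, n_targets
    bits = log2(n)
    if 0 < p < 1:                      # the correction term vanishes at p = 1
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)

# e.g. picking among 32 targets with 95% accuracy every 2 seconds:
print(round(itr_bits_per_min(32, 0.95, 2.0)))   # roughly 134 bit/min
```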

Friday, March 20, 2020

Flexible display: A unique display type

MindBEE




A flexible display is an electronic visual display that is flexible in nature, as opposed to the more prevalent traditional flat-screen displays used in most electronic devices.

The concept of foldable displays in tablets, mobile phones, and notebooks has garnered curiosity from industry observers and some techno-geeks in recent years. So far, demand hasn’t exactly been robust, and the technology has been experiencing some growing pains.
But at least one prognosticator expects this technology to grow by leaps and bounds in coming years. A report by market research firm Display Supply Chain Consultants (DSCC), titled 2019 Foldable Display Market Update and Outlook Report, projects unit shipments of foldable display panels to rise from 360,000 in 2019 to over 68 million units by 2023, for a compound annual growth rate (CAGR) of 272%. Sales revenue from foldable displays is expected to grow at a 242% CAGR, from $62 million in 2019 to $8.4 billion in 2023.
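For readers unfamiliar with the term, CAGR is the constant yearly growth rate that would turn the starting value into the ending value over the period. A quick back-of-the-envelope check of the shipment figures above:

```python
# Compound annual growth rate: the constant yearly rate that grows the
# starting value into the ending value over the given number of years.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# DSCC's foldable-panel shipment forecast, 2019 -> 2023 (four years):
print(f"{cagr(360_000, 68_000_000, 4):.0%}")   # about 270%, in line with the report
```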
There’s a lot of development activity underway in foldable displays already. According to DSCC, Samsung, Huawei, Lenovo/Motorola, Xiaomi, Oppo, Vivo, TCL, Google, Sony, and Apple are some of the companies with roadmaps to produce foldable devices.

Foldable displays are expected to find their way into clamshells, a form factor used in many mobile phones before smartphones. Now clamshells are expected to make a comeback in newer smartphones. According to DSCC, smartphones with clamshell form factors offer a smaller seam, a smaller hinge, higher yields, and reduced risk. Motorola, Samsung, and several other makers are slated to introduce clamshell smartphones starting next year, with the clamshell form factor accounting for a 60% unit share of the foldable smartphone market between 2020 and 2023.
Despite some early missteps, DSCC expects Samsung to take the lead in foldable smartphone penetration and maintain the highest share throughout the forecast period as it looks to establish this category and introduce the most products. Huawei is expected to hold the second-largest share until Apple enters the market, which is forecast for 2022. Huawei’s upcoming Mate X is expected to have a folding display.

DSCC also expects Samsung to launch two to three products in 2020 with ultra-thin glass (UTG), offering an aggressive folding radius while retaining the scratch resistance and hardness of glass, as well as the touch experience consumers have grown accustomed to.

Tuesday, March 17, 2020

White House Urges Researchers to Use AI to Analyse 29,000 Coronavirus Papers

MindBEE


The White House's Office of Science and Technology Policy on Monday challenged researchers to use artificial intelligence technology to analyse about 29,000 scholarly articles to answer key questions about the coronavirus.
The White House office said it had partnered with companies such as Microsoft and Alphabet's Google to compile the most extensive database of scholarly articles about the virus available to researchers.
The World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) have said they want help to better understand the origins and transmission of the coronavirus in aid of developing a vaccine and treatments. The coronavirus causes a respiratory illness known as COVID-19.
The hope is that computers will be able to scan the research more quickly than humans and uncover findings that humans may miss, US Chief Technology Officer Michael Kratsios, who works in the White House, told reporters on a conference call.
Machine Learning, a form of artificial intelligence in which software is designed to detect patterns in data on its own, is already used in healthcare and other industries to develop summaries from large amounts of text. But before it can effectively draw conclusions, machine learning software sometimes needs to analyse millions of similar content items.
Only about 13,000 of the coronavirus articles are included in the new database in their entirety in a format that makes it easy for software to analyse, Kratsios said. The database contains partial text, such as summaries, of the other 16,000 articles.
The database and researchers' submissions are being hosted on Google's Kaggle service, a popular tool for organising machine learning competitions online.
Officials with the US government along with American tech companies and research institutions said they rushed in the last few days to get legal permission from academic publishing companies and others to make the coronavirus papers widely available.
Microsoft's chief scientific officer, Eric Horvitz, whose company's software helped curate coronavirus-related papers, told reporters the goal is to "empower scientists and empower (health) care practitioners to come to solutions more quickly."
"It's really all hands on deck for this," he said.

Nokia 2.2 Android 10 Update Starts Rolling Out Today, HMD Global Announces

MindBEE

The Nokia 2.2 is finally receiving its Android 10 update. HMD Global, the maker of Nokia-branded phones, started rolling out the Android 10 update on Tuesday. This means the Nokia 2.2, running the latest operating system, will enjoy features such as enhanced privacy and security settings, Dark mode, Smart Reply, Focus mode, and more. Other Nokia smartphones will also get the Android 10 update in the coming months, as per the revised roadmap HMD Global shared last week, adjusted for the impact of the coronavirus.


Built for everyday use and powered by the MediaTek Helio A22 quad-core processor, the Nokia 2.2 features 3GB of RAM, 32GB of onboard storage, and an SD card slot supporting up to 400GB of extra storage.

To recall, the Nokia 2.2 that was released last year came with Android 9 Pie. Since its launch, the smartphone has witnessed several price cuts.


Computer vision: A step towards the future

MindBEE
In simple words, computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From an engineering perspective, it seeks to understand and automate tasks that the human visual system can do.

It is closely related to image processing. Image processing is mainly focused on processing raw input images to enhance them or to prepare them for other tasks. Computer vision is focused on extracting information from input images or videos in order to understand them and interpret the visual input the way the human brain does.

What is computer vision used for?

Computer vision, an AI technology that allows computers to understand and label images, is now used in convenience stores, driverless car testing, daily medical diagnostics, and in monitoring the health of crops and livestock.

One study, for example, proposed a vision-based system that remotely measures the dynamic displacement of bridges in real time using digital image processing techniques. ... A digital video camera combined with a telescopic device captures video of a target installed at the measurement location.

How does computer vision work? 

Computer vision works by analyzing the different components of an image. A simple example is finding the edges in an image. ... It is a higher level of image processing where the input is an image and the output is not an image but an interpretation of the image.
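As a small sketch of the edge-finding example (the file name is a placeholder, and Sobel filtering is just one common way to do it):

```python
# Simple edge detection: convolve a grayscale image with Sobel filters and
# look at the gradient magnitude. "photo.jpg" is a placeholder file name.
import numpy as np
from PIL import Image
from scipy.ndimage import sobel

gray = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)

gx = sobel(gray, axis=1)      # horizontal intensity changes
gy = sobel(gray, axis=0)      # vertical intensity changes
edges = np.hypot(gx, gy)      # large magnitude = likely edge

Image.fromarray((255 * edges / edges.max()).astype("uint8")).save("edges.jpg")
```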
Technically, a computer doesn't see an image but a matrix of pixels, each of which has three components: red, green and blue. Therefore, a 1,000-pixel image for us becomes 3,000 values for a computer, each holding the intensity of one color component.
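To see that matrix of pixels directly, one can load an image as an array (again, the file name is a placeholder):

```python
# What the computer actually "sees": an array of red, green and blue
# intensities. "photo.jpg" is a placeholder file name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"))

print(img.shape)    # e.g. (height, width, 3): three intensity values per pixel
print(img[0, 0])    # the red/green/blue intensities of the top-left pixel
```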

Future of computer vision
Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information, for example in the form of decisions.

Our Team

  • Sajal (BITian, Diploma in ECE)