
Thursday, 4 January 2018

Making Visualforce work in Lightning

At Dreamforce 2017, we told you a lot about what you can do with Einstein, IoT, and Lightning. And many of you have gone through the awesome Lightning Consultations to learn how to migrate your org to Lightning. In case you were not able to get hands-on with those experts, this guide walks you through part of that migration. There are a lot of steps involved, so we focus on just one major area: migrating Visualforce and Apex using the Lightning Experience Visualforce Report.

What is wrong with your Visualforce?

The goal of the Visualforce Report is to highlight key things in your Visualforce pages that either aren’t supported in Lightning Experience or can break outright for the user. There are common things Visualforce developers have done in the past that may not work once Visualforce is displayed within a page rendered by Lightning components. These include things like iframes, directly linking to pages using record IDs, and usage of the sforce.one library.

Running the report


The Visualforce Report is a hidden gem in the Lightning Experience Readiness Report. You access it by running the Lightning Readiness Report as you normally would; the report then arrives in your email. If you accidentally deleted that email, not to worry: all of your previously run reports can be found in the Files tab of your org.

Update your code

The report provides you with lots of information on the first few pages. The report is constantly evolving, so it’s always good to read the whole thing, even if it’s your 10,000th run. Keep in mind that the report evaluates only pages that have been accessed in the past 90 days, so it only covers what is commonly used. If you have a page that’s only run during a specific time of year, it can have issues that never show up in the report. I recently ran the report on an old org that’s still on Classic to get some examples of errors that you can run into.

Error 1 – showHeader


You can see here that the report is pretty self-explanatory: this particular Visualforce page needs just a bit of editing. We have a few options. We can leave the page as is; in Lightning Experience the header still shows, but in Classic the page behaves as expected. The other option is to look at the wider use case, determine whether showHeader really needs to be false, and, if not, simply remove the attribute from your Visualforce page. With just that bit of quick thought, you can move on; your page is ready for Lightning Experience.

Error 2 – window.


This issue is a bit trickier. Instead of using window., you can often replace the functionality with something from the Lightning Components Developer Guide. We commonly use window. to navigate, so a potential replacement is force:navigateToURL. Your solution depends on your problem, so be sure to test and don’t be afraid to do a larger refactor.
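As an illustration only, here is a minimal sketch of the navigation case (the function name and the simple URL use case are mine, not a prescribed fix): it prefers the sforce.one navigation call when the Visualforce page is running inside Lightning Experience or Salesforce1, and falls back to plain browser navigation in Classic. From inside a Lightning component you would fire the force:navigateToURL event instead.

    // Minimal sketch, not a drop-in fix: use the container's navigation API
    // when available, and plain browser navigation in Salesforce Classic.
    function navigateTo(url) {
      if (typeof sforce !== 'undefined' && sforce && sforce.one) {
        // Lightning Experience / Salesforce1: let the container handle routing
        sforce.one.navigateToURL(url);
      } else {
        // Salesforce Classic: regular browser navigation still works here
        window.location.assign(url);
      }
    }

    // Example usage from a page's JavaScript: navigateTo('/' + recordId);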

What comes next?

This report continues to change and improve with each release. It’s recommended that you run it at least once every release to make sure you’re up to date.

Rolling your own: A Node.js-based Visualforce Analyzer

The Visualforce portion of the Readiness Report was prototyped in Node.js before being converted to Java and rolled into the official product. While the Readiness Report should be the first step for any Lightning migration, some developers may find it useful to have a tool they can hack to their heart’s content. If that is your cup of tea, head over to the repo and clone it. Running the current project sets up a simple website where you can log into your org and see something similar to the Readiness Report. Beyond that, any JSON file in the rules directory is picked up by the app and run against your Visualforce and related JavaScript. The rules files essentially define inclusions and exclusions for string matching, making it easy to find pages that do or don’t reference specific things.
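The exact rule format is defined in the repo itself, but conceptually each rule boils down to “flag a page if it contains these strings and doesn’t contain these others.” A hypothetical rule and the matching logic might look something like the following sketch (the field names here are illustrative, not the project’s actual schema):

    // Hypothetical rule object; the real project's JSON schema may differ.
    const rule = {
      name: 'Uses window for navigation',
      include: ['window.location', 'window.open'],  // strings that flag a page
      exclude: ['sforce.one.navigateToURL']         // strings that clear it
    };

    // Flag the page if any "include" string matches and no "exclude" string does.
    function pageMatchesRule(pageSource, rule) {
      const hit = rule.include.some(s => pageSource.includes(s));
      const safe = rule.exclude.some(s => pageSource.includes(s));
      return hit && !safe;
    }

    // pageMatchesRule(visualforceMarkup + relatedJavaScript, rule); // => true/false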

Related Links:
Documentation: Kick Off the Lightning Experience Readiness Check:
https://help.salesforce.com/articleView?id=lex_readiness_check_kick_off.htm&type=0
Documentation: Review and Act on the Lightning Experience Readiness Report
https://help.salesforce.com/articleView?id=lex_readiness_check_report_rollout_strategy.htm&type=5

Reference Link:
https://developer.salesforce.com/blogs/2018/01/making-visualforce-work-lightning.html 

Monday, 5 June 2017

Image-Based Search with Einstein Vision and Lightning Components

Einstein Vision enables you to bring the power of image recognition to your application. As an example, this blog post describes how to use Einstein Vision to add image-based search to the DreamHouse sample application.
In the DreamHouse sample application, prospective home buyers know the type of house they like when they see it, but they may not know what that type of house is called (Victorian, colonial, Greek revival, and so on). This can limit their ability to search for houses based on their preferred architectural style. Using image-based search, they can search for houses based on a picture of a house they like.
To configure and use image-based search in your own instance of the DreamHouse application, follow these steps.

Step 1: Install the DreamHouse sample application

Follow these instructions to install the DreamHouse sample application. If you have a previous version of DreamHouse, make sure you install the new version (1.8 or higher).

Step 2: Create an Einstein platform account

If you already have an Einstein Vision account, skip this step and go straight to step 3.
  1. Go to the Einstein Vision signup site.
  2. Click Sign Up Using Salesforce.
  3. Enter your username and password and click Log In, then click Allow. You can authenticate with any org that you are a registered user of. To keep it simple, use your credentials from your DreamHouse org.
  4. On the activation page, click Download Key. A file named einstein_platform.pem is saved on your local file system.

Step 3: Upload your key file

  1. In the DreamHouse application, click the Files tab (depending on your screen size, it may be under the More option).
  2. Click Upload File.
  3. Select the einstein_platform.pem file you downloaded in step 2, and click Open. The einstein_platform file should appear in the list of files.
  4. In Setup, type Custom Settings in the Quick Find box and click the Custom Settings link.
  5. Click the Manage link next to DreamHouse, and click the first New button to create default settings for the org.
  6. For Einstein Vision Email, enter the email address of the Salesforce user you used when creating the Einstein Vision key in step 2. You can leave all the other fields empty.


Step 4: Create and train an Einstein Vision dataset

  1. In the DreamHouse application, click the Einstein Vision tab. The Einstein Vision tab contains a custom component (EinsteinDashboard) that helps you manage your Einstein Vision datasets.
  2. Keep the default URL to the houses.zip file, and click the Create Dataset button. A new tile should appear for the houses dataset. houses.zip contains sample house pictures used to train the model. The house pictures are organized in three directories that Einstein Vision uses as labels: Colonial, Contemporary, and Victorian. Feel free to download and uncompress houses.zip to take a look at the directory structure.
  3. Click the Refresh Datasets button until you see the labels in the houses dataset (Colonial, Contemporary, and Victorian). Note that there are 15 sample pictures per label. This is enough for this sample application, but in real life, you should add more sample images to increase the model’s accuracy.
  4. Click the Train button.
  5. Click the Models tab.
  6. Click the Refresh Models button several times until the progress column indicates 100% (the training process can take a few minutes). A rough sketch of the status call this button polls appears after this list.
  7. Select the model ID and copy it to your clipboard using Command-C (Mac) or Ctrl-C (Windows).
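Under the hood, the Create Dataset, Train, and Refresh Models buttons wrap calls to the Einstein Vision REST API. As a rough sketch of what Refresh Models is polling, the status check looks something like the following Node.js snippet. The endpoint and response field names here are assumptions based on the public Einstein Vision v2 API; in the sample app the calls are made server-side using the key you uploaded in step 3, not from Node.

    // Rough sketch only: poll the Einstein Vision training status for a model.
    // Assumes an OAuth access token already minted from einstein_platform.pem;
    // error handling is omitted for brevity.
    const https = require('https');

    function getTrainingStatus(modelId, accessToken) {
      return new Promise((resolve, reject) => {
        https.get({
          host: 'api.einstein.ai',
          path: '/v2/vision/train/' + modelId,
          headers: { Authorization: 'Bearer ' + accessToken }
        }, res => {
          let body = '';
          res.on('data', chunk => (body += chunk));
          res.on('end', () => resolve(JSON.parse(body)));
        }).on('error', reject);
      });
    }

    // getTrainingStatus('YOUR_MODEL_ID', token)
    //   .then(status => console.log(status.status, status.progress));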


Step 5: Use the model to perform image-based search

In the Lightning Experience

  1. In the DreamHouse app, click the Property Explorer tab.
  2. Click the gear icon (upper right corner), then click Edit Page to open App Builder.
  3. Select the Image-Based Search custom component, and add it to the right sidebar of the page. Paste the model ID in the component property panel.
  4. Click Save and click Back.
  5. Drag an image of a colonial, Victorian, or contemporary house into the drop area of the Image-Based Search component. The component submits the image to the Einstein Vision service, which returns house type predictions; a rough sketch of that prediction call follows below. The PropertyTileList component then searches for houses matching the predicted house type.
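As a rough sketch of that round trip, the prediction call underneath looks something like this. The endpoint and the modelId/sampleLocation form fields are assumptions based on the public Einstein Vision v2 API, and the sample app routes the call through the server rather than the browser.

    // Rough sketch only: ask Einstein Vision which house style an image shows.
    // Assumes a publicly reachable image URL, a valid OAuth access token, and
    // the v2 predict endpoint with modelId/sampleLocation multipart fields.
    const https = require('https');

    function predictHouseStyle(imageUrl, modelId, accessToken) {
      const boundary = '----dreamhouse';
      const body = [
        '--' + boundary,
        'Content-Disposition: form-data; name="modelId"', '', modelId,
        '--' + boundary,
        'Content-Disposition: form-data; name="sampleLocation"', '', imageUrl,
        '--' + boundary + '--', ''
      ].join('\r\n');

      return new Promise((resolve, reject) => {
        const req = https.request({
          method: 'POST',
          host: 'api.einstein.ai',
          path: '/v2/vision/predict',
          headers: {
            Authorization: 'Bearer ' + accessToken,
            'Content-Type': 'multipart/form-data; boundary=' + boundary,
            'Content-Length': Buffer.byteLength(body)
          }
        }, res => {
          let data = '';
          res.on('data', c => (data += c));
          res.on('end', () => resolve(JSON.parse(data)));
        });
        req.on('error', reject);
        req.end(body);
      });
    }

    // The response lists labels with probabilities, roughly:
    // { probabilities: [{ label: 'Victorian', probability: 0.92 }, ...] }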

Using the bot

DreamHouse comes with a bot custom component (in the utility bar) that lets you ask questions formulated in natural language in an instant messaging–like interface. For example, you can ask: “3 bedrooms in Boston” or just “find house.” Read this post to learn more about the bot custom component. A new bot command has been added to DreamHouse to support image-based search. Before you can use that command, specify your own model ID in the command handler:
  1. In the developer console, open the HandlerImageBasedSearch Apex class.
  2. Provide the value of your model ID for the modelId string at the top of the file.
  3. Save the file.

To use image-based search in the bot:
  1. Type “search houses like this” in the bot input field.
  2. Drag an image of a colonial, Victorian, or contemporary house in the drop area of the bot component.

In Salesforce1

A Lightning page named House Explorer is available to provide image-based search in the Salesforce1 app. Once again, all you have to do is configure the Image-Based Search component on that page with your own model ID:
  1. In Setup, type App Builder in the Quick Find box and click the Lightning App Builder link.
  2. Click Edit for the House Explorer page.
  3. Click the Image-Based Search component.
  4. Paste your model ID in the component property panel.
To add the House Explorer page to the Salesforce1 mobile navigation:
  1. In Setup, type Navigation in the Quick Find box and click the Salesforce1 Navigation link.
  2. Add House Explorer to the Selected box.
  3. Click Save.
To use image-based search in Salesforce1:
  1. Tap House Explorer in the menu.
  2. Tap Upload Files.
  3. Select a picture of a colonial, Victorian, or contemporary house in your image library.


Wrapping up

Applications are getting smarter. With Einstein Vision, you can utilize image recognition to build artificial intelligence–powered apps fast. This blog post describes a simple visual search example, but the possibilities are endless. We can’t wait to see how you’ll bring the power of image recognition to your applications.



Monday, 25 January 2016

The Apex Interactive Debugger

Salesforce.com is pleased to announce that the Apex Interactive Debugger is now generally available!

Sweet. What does it do?
The interactive debugger is an addition to our existing debugging suite. It does exactly what you’d expect an interactive debugging tool to do. It allows you to set breakpoints throughout your code, in the cloud, on our multitenant architecture. It stops requests at these breakpoints. When it stops, you can inspect the transaction state. You have full stack information. You have full variable information. You can control the transaction, stepping into and out, and running to the next breakpoint.
Didn’t you demo this a year ago? What took so long?
Despite what I said just now, this is not your average interactive debugging tool. Salesforce is a cloud-based multitenant system, which presents multiple challenges. The threads you want to inspect are stopped on a different computer than the one you are using to debug. Routing your subsequent step/run requests to the appropriate stopped thread, on the appropriate app server, is complicated. It’s complicated enough that we are the first to ever try such a thing! We also had to ensure service protection, meaning automated monitoring tools and “panic button” capability in case anything ever goes squirrely.
Thank you for keeping us safe. How do I use this tool?
Just use the standard debugging tools in the Eclipse IDE. We adhered to the “principle of least surprise,” so the buttons and commands in Eclipse will do the same thing they’d do if you were debugging a Java application locally. The step buttons step. The variable pane shows the variables. Double-clicking the gutter sends us breakpoints. If you have used a debugger before, it will feel very similar.
Eclipse provided us with a fully-featured debugging UI, which allowed us to focus on the underlying connectivity instead of the user experience. And we have a Force.com IDE plugin already, which you probably know about (and which you probably have strong opinions about (hopefully good (and if not (or if so), please join the open-source development project!))).
Will I be able to debug in other IDEs?
Yes, but not just yet. There are three parts to the debugger: the cloud-based multitenant routing system previously mentioned; the client application where you interact with and operate the debugging process (currently only Eclipse); and the API that passes information between that client and the application server where your thread is stopped. My friends who have built IDE plug-ins frequently ask when we can make the API part public. (Note: building a Force.com IDE plug-in is an ideal way to become my friend. Highly recommended.) They will have access soon and will hopefully incorporate the interactive debugging capability into their IDE tools.
I love the Developer Console. Can I debug there?
Aww, shucks. *blush* Alas, you will not be able to use the interactive debugger in the Developer Console. That would require building out a brand-new UI, which we do not plan to do.
For now, the team is focused on finishing out the debugger functionality. (Click to see a live construction cam of the team building the debugger!) We are designing ways to set method breakpoints and exception breakpoints. We have a plan to offer statement evaluation (“eval”, to those in-the-know), which would enable conditional breakpoints.
We are also focused on capacity management. Capacity is the critical target for us, one which we’re constantly monitoring. We want as many of you to be able to use this as possible. We’ll be releasing a few new capabilities in the coming release to reduce unnecessary use, which should permit more legitimate use. As an example, you’ll be able to whitelist different users or entry points, so that breakpoints in common code are ignored when hit by other users or cron jobs you’re not trying to debug.
Tell me how this capacity management works….
One part of managing capacity is the fact that this is an add-on product that must be purchased, and there are a limited number of spaces we can sell.
I heard you were now charging for all debugging. That’s CRUEL!
It’s also FALSE.
We are only charging for the interactive debugger. Debug logs and the nifty Developer Console parsed log viewer, checkpoints, and heap dumps are still free as they have always been.
I’m a retired developer myself, so I know that efficiency and frugality are part of the mindset. I assure you, though, that you are very happy that we are charging for this. The alternative was the common Salesforce pattern in managing multitenant capacity: LIMITS.
Ah. Good point. I don’t know which I like less: limits, or paying for things.
We asked ourselves that question, too. We determined that limits would have made this particular feature difficult to use. None of the types of limits that work for the rest of our platform work with the particular usage patterns of an interactive debugger.
Typical transaction timeouts wouldn’t work. Imagine if you had two minutes for a stopped thread for debugging. You’d have a clock counting down in your head every time you used the debugger. You’d be rushing to get stuff done, and not free to sit and stare at the code and the transaction state and try to unwind how in the name of all things holy did sCountryCode get set to ‘Never’?!?
Typical usage counting, like API calls per day, also doesn’t work for debugging. You’d accidentally set a breakpoint in the wrong spot, and it would get hit by a bunch of threads, and you’d be out of “credits” for the day.
Due to our finite capacity, we’d need a queue just to get a debugging session going. When you stop at a breakpoint, you consume a thread and a database connection. Normal transactions also consume one of each, but they’re finished within milliseconds. Your debugging threads will live for several minutes, which means they have an outsized impact on capacity. In effect, stopping at a breakpoint reduces our service capacity by one thread/connection.
We can absorb some amount of this capacity reduction without impacting service quality. Once we’ve reached that amount, we can’t let any more debugging sessions in the door. So you’d be waiting for an opportunity rather than solving problems.
OK, I see that charging some amount keeps the service available.  How does it work?
The unit being sold is not per-user, and it’s not per-org. You can purchase debugger sessions, which are shared across all users and all sandboxes from your parent org.
If your organization has purchased debugging capability, all of the sandboxes spawned from it are enabled for debugging. The number of sessions you purchase represents the number of your sandboxes that can be debugging at the same time.
Think of this like a phone line in your house, from back when you would have had a phone line in your house. There were many telephones, and they could all make calls, assuming nobody was on the line. However, if Mom needed to call someone, she had to wait until you were done, because you were having a VERY IMPORTANT CONVERSATION and you would be off IN A MINUTE and STOP SHOUTING I’M ON THE PHONE and then you’d stretch the cord around the dresser and all the way into your closet so she’d stop bothering you and
What were we talking about?
Oh, right, sorry about that. The debugger sessions you purchase are like your phone line, which only one of you could use at one time. If it got ugly, you paid for a second phone line.
We are going to provide visibility in the parent org as to which sandbox(es) are engaged at any point in time, along with the user doing the debugging. This will allow the admin to contact them and ask them nicely to get off the phone. Er, debugger. We’ll also have a less-polite “Kill Session” button in case someone goes rogue.
Mom would have loved that button.
Yes, she would have. Fortunately for me, I wasn’t building technology back then, only abusing it.
What other capacity work are you’re doing?
(Warning: the following paragraphs may contain forward-looking statements. If there are small children in the room, you may want to ask them to leave, lest they make purchasing decisions based on anything but currently available software.)
I mentioned the white-list idea before. There will also be an “are you still there” pop-up if you are idle for a few minutes, similar to the one you get at your online banking website. This will let us terminate sessions for people who have stopped a thread and stopped paying attention, which will free up threads for you (because you’d never do such a thing!). We’re also tweaking our load-balancing algorithms to attempt to maximize how many threads we can stop at the same time.
If we get all this right, we ought to have capacity for every “serious” debugging session request. We won’t be able to scale it to the looky-loo use case, though, so we’ll always probably have a nominal charge to restrict use to those of you who need it. (If you’re reading this, you are in that group.)
You said “sandbox” a lot of times. What about debugging my production org?
Currently, interactive debugging is only available in sandbox orgs. This has to do with the number of threads taken out of the capacity pool that we can absorb without impacting the service. In sandbox pods, that number is sufficient to offer an interactive debugging service. In production pods, it’s pretty much zero.
We are working on a way to offer occasional debugging sessions in production, but we must ensure that such a thing will not impact production system operation.
How will I debug in my DE org?
You will not be able to debug in current DE orgs, since these are not on sandbox hardware. You’ll need to use the sandbox orgs that are a part of your company’s org (or your client’s org, if you’re doing project work).
Does this mean ISVs cannot use the debugger?
Not at all! ISVs do some of the most debug-worthy coding on the platform, so we made sure they could use the product.
Hey! What are you trying to say about ISV code?
I’m just saying it’s complex, that’s all! If an ISV purchases the debugger, they will get sandbox environments provisioned, where they can develop and debug and share sessions like customer orgs.
I’ve wanted a sandbox on my DE org for years, now I can finally get one?!
Yes, if you purchase the debugger.
How will ISVs debug their code in subscriber orgs?
Ah, you have observed that what I’ve described allows ISVs to debug their application in isolation, but not as a part of a subscriber org. Good catch.
(Warning: more forward-looking statements.)
We are going to allow debugging to occur in a way similar to the current “login-as” functionality. When subscribers debug, ISV-managed code and variables will be removed from the variables and stack information just like they have been in debug logs forever. ISVs can request permission to log in to the subscriber org, which will unblock the managed stack and variables when running the debugger. This is similar to how debug logs are made available today.


Tuesday, 3 November 2015

The Easy Way To Build Apps That Look Like The New Salesforce

Salesforce recently announced the new Salesforce Lightning Experience. It’s a completely re-imagined user experience designed to maximize sales rep productivity, and it is backed by a robust and flexible platform. While we are incredibly excited about this latest release of Salesforce, we are also very excited by what grew out of the Salesforce Lightning effort, specifically the Lightning Design System.


Salesforce has always been committed to customer success and that’s why we are the world’s leading CRM solution. We got here by listening to and responding to our customers’ needs. As a result, our teams were often optimized to meet those needs within specific product features and areas.

But in order for us to succeed in re-inventing our experience, we needed to look across product areas and business units. We needed to break the mold and create new, best-in-class experiences that could scale across the entire organization and beyond.

A platform for designing at scale


It’s important for my team to always remember that Salesforce is both an app and a platform. That means our approach to design needs to take into account not only what the best app experience is for a particular problem, but also how that experience can scale and be customized to solve other problems we haven’t even anticipated. This is where the Salesforce partner and developer community really shines.
With the Design System, we figured out a way to make it as easy as possible for our development teams to create modern, usable, scalable, accessible, and beautiful user experiences. The Design System is how we communicated effectively and created consistency around the new Lightning Experience. Instead of communicating our design through static documentation, we used living code.
The Design System is based on our four design principles:
Clarity
Eliminate ambiguity. Enable people to see, understand, and act with confidence.

Efficiency
Streamline and optimize workflows. Intelligently anticipate needs to help people work better, smarter, and faster.


Consistency
Create familiarity and strengthen intuition by applying the same solution to the same problem.
 

Beauty
Demonstrate respect for people’s time and attention through thoughtful and elegant craftsmanship.
 

All four of these design principles have been translated into extensible Cascading Style Sheets (CSS). And here’s the best part. We’re taking all this expertise and all the tools we used internally, and opening them up through our Design System so you can also have access to what we have inside Salesforce.

You can think of the Design System as a how-to manual that enables developers and partners to build beautiful components that work with Salesforce across devices and operating systems. It’s a living style guide and best practices, married to code.

The Design System will only get better


With the Design System, we’re making it easy for partners and ISVs to match Salesforce’s look and feel. What you’ll see is our UX broken down into easily consumable and reusable components. Those components can be taken and applied to your business cases to build your own custom solutions.
While style guides look backward and are out of date as soon as they are released, design systems look forward. When we update our UX, there is no need to worry about opening code and changing values by hand. Partners and ISVs can just point to the Design System. It is always current.
The Salesforce Lightning Design System is laying the foundation for continually improving our design and our processes. This is a different way for us to operate and no one else in the enterprise space is doing this. We think it’s going to be awesome, but we’re also learning and improving. We really want to know what works and what doesn’t work for you, so please give us feedback. The more feedback we get, the better it will become.
Create the world’s best enterprise app experiences with the Salesforce Lightning Design System



Reference: https://www.salesforce.com/blog/2015/09/build-apps-look-like-new-salesforce.html

Monday, 12 October 2015

'Performance cloning' techniques to boost computer chip memory systems design.


North Carolina State University researchers have developed software using two new techniques to help computer chip designers improve memory systems. The techniques rely on "performance cloning," which can assess the behavior of software without compromising privileged data or proprietary computer code.
Computer chip manufacturers try to design their chips to provide the best possible performance. But to find the most effective designs, manufacturers need to know what sort of software their clients will be using.
"For example, programs that model protein folding use a lot of computing power, but very little data -- so manufacturers know to design chips with lots of central processing units (CPUs), but significantly less memory storage than would be found on other chips," says Yan Solihin, an associate professor of computer engineering at NC State and an author of two papers describing the new techniques.
However, many large customers -- from major corporations to Wall Street firms -- don't want to share their code with outsiders. And that makes it tough for chip manufacturers to develop the best possible chip designs.
One way to address this problem is through performance cloning. The concept behind performance cloning is that a chip manufacturer would give profiler software to a client. The client would use the profiler to assess its proprietary software, and the profiler would then generate a statistical report on the proprietary software's performance. That report could be given to the chip manufacturer without compromising the client's data or code.
The profiler report would then be fed into generator software, which can develop a synthetic program that mimics the performance characteristics of the client's software. This synthetic program would then serve as the basis for designing chips that will better meet the client's needs.
Previous work at Ghent University and the University of Texas at Austin has used performance cloning to address issues related to CPU design -- but those initiatives did not focus on memory systems, which are an important element of overall chip design.
Researchers have now developed software using two new techniques to help optimize memory systems.
The first technique, called MEMST (Memory EMulation using Stochastic Traces), captures a program's memory behavior in a synthetic program by focusing on the amount of memory the program uses, the location of the data being retrieved and the pattern of retrieval.
For example, MEMST looks at how often a program retrieves data from the same location in a short period of time, and at how likely a program is to retrieve data from a location that is near other data that's been retrieved recently. Both of these variables affect how quickly the program can retrieve data.
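To make those two measures concrete, here is a toy illustration in JavaScript, and in no way the researchers' actual tooling: given a trace of memory addresses, it reports how often an address is reused within a short window (temporal locality) and how often an access lands in a recently touched cache line (spatial locality). Statistics of this kind could be handed to a chip designer without revealing the code or data that produced the trace.

    // Toy illustration only, not the MEMST tool itself: compute simple
    // temporal and spatial locality statistics from a trace of byte addresses.
    function localityStats(addresses, windowSize = 64, lineSize = 64) {
      let temporalHits = 0; // same address seen again within the window
      let spatialHits = 0;  // address falls in a recently touched cache line
      const recent = [];    // sliding window of recent addresses

      for (const addr of addresses) {
        if (recent.includes(addr)) temporalHits++;
        if (recent.some(a => Math.floor(a / lineSize) === Math.floor(addr / lineSize))) {
          spatialHits++;
        }
        recent.push(addr);
        if (recent.length > windowSize) recent.shift();
      }
      const n = addresses.length || 1;
      return { temporalReuse: temporalHits / n, spatialReuse: spatialHits / n };
    }

    // A repeated strided scan shows high spatial and temporal reuse:
    // localityStats([0, 8, 16, 24, 0, 8, 16, 24]);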
The second technique, called MeToo, focuses on memory timing behavior -- how often the program retrieves data and whether the program has periods in which it makes many memory requests in a short time. Memory timing behavior can have a significant impact on how a chip's memory system is designed.
For example, if you think of memory requests as cars, you don't want to have a traffic jam -- so you may want to be sure there are enough lanes for the traffic. These traffic lanes equate to memory bandwidth; the broader the bandwidth, the more lanes there are.
"Both MEMST and MeToo are useful for chip designers, particularly for designers who work on memory components, such as DRAM, memory controllers and memory buses," Solihin says.
The new techniques expand on previous work done by Solihin that used performance cloning to look at cache memory.
"Our next step is to take MEMST and MeToo, as well as our work on cache memory, and develop an integrated program that we can commercialize," says Solihin, author of the forthcoming Fundamentals of Parallel Multicore Architecture, which addresses memory hierarchy design.


Source: http://www.sciencedaily.com/releases/2015/09/150930140337.htm

Friday, 4 September 2015

Pointing the Way: 3D Computer Cursors Could Navigate Virtual Worlds




Forget everything you thought you knew about computer cursors. Researchers have come up with a way to turn cursors into a tool that can navigate around 3D space.
Conventional pointers that are controlled with a trackpad and show up as a tiny arrow on a screen will soon be outdated, according to scientists at the University of Montreal in Canada. They have created a way to turn smartphones, tablets or anything with an interactive surface into a translucent so-called "controlling plane" to select and manipulate objects in a 3D world.
This futuristic technology could play an integral role in how virtual reality software responds to how users move in real life.

Traditionally, a mouse and a cursor are confined to a screen "like a jail," said study lead researcher Tomás Dorta, a professor at the University of Montreal's School of Design. "It's the kind of interaction which has to evolve," he told Live Science.
The high-tech cursor developed by Dorta and his colleagues can select objects in the 3D virtual world. Instead of clicking on icons to select things with a trackpad or mouse, the screen of a smartphone or tablet becomes the trackpad itself and produces a translucent plane on the screen that responds to all kinds of movements.
"If I have this cup," Dorta said, picking up a coffee mug. "When it's selected, it's like I have it in my hand."
The controlling plane appears on the screen and can enlarge or shrink an object when the user pinches or expands it with their fingers. It twists and tilts when the device does, and users can also copy and paste with it. In tests so far, the researchers were able to select chairs and tables in a building and organs inside a large, to-scale skeleton image on the screen.
At the moment, the cursor technology can be demonstrated using Hyve3D technology, which is an immersive design system that visualizes 3D sketches on a screen in front of the user. The screen is also collaborative, so people can link their devices to the same software and work together on a project. Contributors can look at the same space from different angles using their various devices, each accessing and manipulating it separately. 
"You can navigate together … working together in the same computer," Dorta said. "Everything 3D, everything collaborative, because the 3D cursor becomes our avatar."
Dorta said potential uses for a collaborative, 3D technology range from interior and architectural design to the development of virtual reality computer games. If phones or tablets can become 3D cursors, then the ultimate goal is for users to access the same program or desktop as their colleagues, wherever they are, he said.
Eventually, this type of cursor technology could be available for operating systems like Windows and Mac OS, Dorta said. This could enable people to access each other’s desktops and see the files and applications on there in 3D, rather than through a window. Dorta thinks people are currently restricted by the window format on computers, and a 3D version of a desktop would make people’s computer interactions easier. Sending a file also won't require a USB or an online folder — you would just need to swoop at it with your phone to "grab" it and it'll be saved to your device, Dorta said.

The traditional computer mouse was invented in 1964, Dorta said, and it's time for something new. The researchers were inspired by the way people interact with the world, and how a computer can seem limited with its 2D restrictions. 

                                  

"Let's do something in 3D, because we are in 3D," Dorta said.
He added that 3D cursors could open up new possibilities in the world of computing. For one, application windows won't need to "stack" or hide on top of each other on a screen because the cursor could move around in 3D space, Dorta said.
While people have become accustomed to desktops and laptops that present information in a 2D landscape, Dorta said, next-generation users will likely experience a different way of interacting with computers. The researchers have noticed that younger users already have more of a knack for using the 3D technology than adults who are "already contaminated with the cursor."
"When we see kids using the 3D cursor, they don't take time to learn," Dorta said. "They do it quickly because it is like mastering the movement of a hand."
Dorta said innovative cursor technologies will continue to evolve to keep up with ever more virtual lives. "It's not only a little arrow to click," he said. "We are 46 years later. We can do better, I think."


Source: http://www.livescience.com/51997-3d-computer-cursors-navigate-virtual-reality.html