Microsoft Bing hiring event interview experience

1st round: given 2 n-ary trees, find all common tree nodes, i.e. nodes with the same value and the same children.
2nd round: merge k sorted lists.
3rd round: given a string, reorder its characters by each letter's frequency.
4th round: design a car remote system.
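A minimal sketch of the 2nd-round problem (merge k sorted lists), using a min-heap over plain Python lists rather than linked-list nodes:

```python
import heapq

def merge_k_sorted(lists):
    """Merge k sorted lists into one sorted list with a min-heap.

    Each heap entry is (value, list_index, element_index), so ties on
    value break on the indices and the merge stays stable.
    Time: O(N log k) for N total elements across k lists."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    merged = []
    while heap:
        value, i, j = heapq.heappop(heap)
        merged.append(value)
        if j + 1 < len(lists[i]):
            # Feed the next element from the same source list.
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return merged

print(merge_k_sorted([[1, 4, 5], [1, 3, 4], [2, 6]]))
# [1, 1, 2, 3, 4, 4, 5, 6]
```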

Questions on string traversal, arrays, and math were common in my interview.

I was asked questions related to DP and backtracking, how to test my code for the DP and backtracking problems, and a question on palindromes.

Print out all the nodes in a binary tree in order without recursion.
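The standard non-recursive approach uses an explicit stack; a sketch with a minimal hypothetical `Node` class:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder_iterative(root):
    """Return a binary tree's values in in-order without recursion,
    using an explicit stack in place of the call stack."""
    out, stack, node = [], [], root
    while stack or node:
        while node:               # descend left, stacking ancestors
            stack.append(node)
            node = node.left
        node = stack.pop()
        out.append(node.val)      # visit the node
        node = node.right         # then walk its right subtree
    return out

#     2
#    / \
#   1   3
print(inorder_iterative(Node(2, Node(1), Node(3))))  # [1, 2, 3]
```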

Return the longest palindromic substring.
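A common O(n²)-time, O(1)-extra-space answer expands around each of the 2n-1 possible palindrome centers; a sketch:

```python
def longest_palindrome(s):
    """Return the longest palindromic substring of s by expanding
    around each of the 2n-1 centers (n letters, n-1 gaps)."""
    best = ""
    for center in range(2 * len(s) - 1):
        # Even centers sit on a letter; odd centers sit between two.
        lo = center // 2
        hi = lo + center % 2
        while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
            lo, hi = lo - 1, hi + 1
        if hi - lo - 1 > len(best):
            best = s[lo + 1:hi]
    return best

print(longest_palindrome("babad"))  # "bab"
```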

Determine if a linked list is circular (give upper and lower runtime bounds for the approach you choose, and compare them to the optimal).
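Floyd's tortoise-and-hare gives the O(n)-time, O(1)-space solution; a sketch with a minimal `ListNode` class:

```python
class ListNode:
    def __init__(self, val):
        self.val, self.next = val, None

def has_cycle(head):
    """Floyd's tortoise-and-hare cycle detection.
    fast gains one step on slow each iteration, so if a cycle
    exists, fast must eventually land exactly on slow inside it."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False   # fast hit the end: the list is acyclic
```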

https://www.glassdoor.com/Interview/Given-a-list-of-n-unsorted-key-value-tuples-which-are-too-large-to-fit-in-memory-return-the-k-tuples-with-the-greatest-val-QTN_1861048.htm

The coding problem he asked me was to find the next larger element in a BST.
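Assuming "next larger element" means the in-order successor of a given value, a sketch that walks down from the root in O(h) time, where h is the tree height:

```python
class BSTNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def next_larger(root, target):
    """Return the smallest value in the BST strictly greater than
    target, or None if no such value exists. Each time we step left,
    the current node becomes the best candidate so far."""
    candidate = None
    node = root
    while node:
        if node.val > target:
            candidate = node.val   # larger; look left for a tighter fit
            node = node.left
        else:
            node = node.right      # too small; everything larger is right
    return candidate
```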

Given a tree of order n which is neither complete nor a search tree, write a function to construct
a new tree of order m in-place. Memory usage is bounded by twice the size of the tree. The new tree
must be complete, and a node A cannot be a child of another node B in the new tree if A was an ancestor
of B in the old tree.


Use two queues, A and B. Perform a BFS (level-order traversal) of the tree, storing child nodes in both A
and B but popping nodes only from A, clearing each node's list of children as you go. Then, starting from the
original root, fill in each node's list of children in the order specified by B. As a node is marked
as a child of another node, add it to A. Continue popping nodes from A and repeating this process until
either A or B is empty.

Given two very large timestamped sorted log files that do not fit in memory (possibly on different machines),
merge them in timestamp order. Provide some test cases.

Perform a chunked merge. Since both files are already sorted, only the merge step of mergesort is necessary. I had
to demonstrate to my interviewer how the chunking worked (e.g. given 64GB of RAM, load 16GB chunks at a time from
each file and flush to disk every time 32GB of logs were merged).

Some test cases: splitting one large file into two in several different ways and checking whether the output
was the same; merging one empty or nonexistent file with another; merging files in invalid formats (and how to
handle such a situation); merging files where each log entry may be larger than memory; etc.
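Setting aside the chunk-size bookkeeping, the merge step itself can be sketched as a lazy stream merge. This assumes each log line begins with a lexicographically sortable (e.g. ISO-8601) timestamp, so plain string order is time order:

```python
import heapq

def merge_logs(path_a, path_b, out_path):
    """Stream-merge two timestamp-sorted log files without loading
    either one into memory. heapq.merge consumes its input iterators
    lazily, so memory stays bounded by the I/O buffers regardless
    of file size."""
    with open(path_a) as fa, open(path_b) as fb, open(out_path, "w") as out:
        # File objects iterate line by line; lines compare as strings,
        # which matches timestamp order under the ISO-8601 assumption.
        out.writelines(heapq.merge(fa, fb))
```

For files on different machines, the same idea applies with the remote file exposed as a line stream (e.g. over a socket) instead of a local file object.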

Given a list of n unsorted key-value tuples which are too large to fit in memory, return the k tuples
with the greatest value, where n is several orders of magnitude greater than k. The list may be dispersed
across multiple machines.

Have each machine maintain a min-heap of size k and keep the k largest tuples from its part of the list. 
Then merge the heaps together into one heap of size k.
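A sketch of both stages, assuming tuples arrive as (key, value) pairs; `top_k_local` runs on each machine, and `top_k_global` merges the m·k surviving candidates:

```python
import heapq

def top_k_local(tuples, k):
    """Keep the k tuples with the greatest value using a size-k
    min-heap keyed on value: O(n log k) time, O(k) memory."""
    heap = []
    for key, value in tuples:
        if len(heap) < k:
            heapq.heappush(heap, (value, key))
        elif value > heap[0][0]:
            heapq.heapreplace(heap, (value, key))  # evict current minimum
    return heap

def top_k_global(per_machine_heaps, k):
    """Combine each machine's k candidates and take the overall top k;
    only m*k tuples ever cross the network."""
    combined = [t for h in per_machine_heaps for t in h]
    return heapq.nlargest(k, combined)
```

The key property is that no tuple outside a machine's local top k can appear in the global top k, so discarding everything else locally is safe.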
